00:00:00.000 Started by upstream project "autotest-per-patch" build number 132811
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.057 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:05.484 The recommended git tool is: git
00:00:05.485 using credential 00000000-0000-0000-0000-000000000002
00:00:05.487 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:05.502 Fetching changes from the remote Git repository
00:00:05.504 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:05.516 Using shallow fetch with depth 1
00:00:05.516 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:05.516 > git --version # timeout=10
00:00:05.527 > git --version # 'git version 2.39.2'
00:00:05.527 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:05.540 Setting http proxy: proxy-dmz.intel.com:911
00:00:05.540 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:11.429 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:11.442 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:11.455 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:11.455 > git config core.sparsecheckout # timeout=10
00:00:11.468 > git read-tree -mu HEAD # timeout=10
00:00:11.488 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:11.509 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:11.509 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:11.671 [Pipeline] Start of Pipeline
00:00:11.681 [Pipeline] library
00:00:11.682 Loading library shm_lib@master
00:00:11.682 Library shm_lib@master is cached. Copying from home.
00:00:11.695 [Pipeline] node
00:00:26.697 Still waiting to schedule task
00:00:26.697 Waiting for next available executor on ‘vagrant-vm-host’
00:15:25.005 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_3
00:15:25.007 [Pipeline] {
00:15:25.019 [Pipeline] catchError
00:15:25.021 [Pipeline] {
00:15:25.037 [Pipeline] wrap
00:15:25.046 [Pipeline] {
00:15:25.054 [Pipeline] stage
00:15:25.057 [Pipeline] { (Prologue)
00:15:25.075 [Pipeline] echo
00:15:25.077 Node: VM-host-SM38
00:15:25.084 [Pipeline] cleanWs
00:15:25.111 [WS-CLEANUP] Deleting project workspace...
00:15:25.111 [WS-CLEANUP] Deferred wipeout is used...
00:15:25.119 [WS-CLEANUP] done
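The checkout above pins the jbp repo to a single revision via a shallow fetch. A minimal sketch of the same sequence for reproducing it outside Jenkins (the destination directory is illustrative; the URL, flags, and SHA are taken from the log):

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # shallow-fetch only master, as in the log, then detach at the resolved revision
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507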
00:15:25.346 [Pipeline] setCustomBuildProperty
00:15:25.449 [Pipeline] httpRequest
00:15:25.841 [Pipeline] echo
00:15:25.843 Sorcerer 10.211.164.112 is alive
00:15:25.852 [Pipeline] retry
00:15:25.855 [Pipeline] {
00:15:25.869 [Pipeline] httpRequest
00:15:25.875 HttpMethod: GET
00:15:25.876 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:15:25.877 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:15:25.879 Response Code: HTTP/1.1 200 OK
00:15:25.880 Success: Status code 200 is in the accepted range: 200,404
00:15:25.881 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:15:26.024 [Pipeline] }
00:15:26.042 [Pipeline] // retry
00:15:26.049 [Pipeline] sh
00:15:26.335 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:15:26.352 [Pipeline] httpRequest
00:15:26.739 [Pipeline] echo
00:15:26.741 Sorcerer 10.211.164.112 is alive
00:15:26.752 [Pipeline] retry
00:15:26.754 [Pipeline] {
00:15:26.769 [Pipeline] httpRequest
00:15:26.775 HttpMethod: GET
00:15:26.776 URL: http://10.211.164.112/packages/spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz
00:15:26.776 Sending request to url: http://10.211.164.112/packages/spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz
00:15:26.777 Response Code: HTTP/1.1 200 OK
00:15:26.778 Success: Status code 200 is in the accepted range: 200,404
00:15:26.778 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz
00:15:29.162 [Pipeline] }
00:15:29.179 [Pipeline] // retry
00:15:29.186 [Pipeline] sh
00:15:29.469 + tar --no-same-owner -xf spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz
00:15:32.789 [Pipeline] sh
00:15:33.137 + git -C spdk log --oneline -n5
00:15:33.137 1ae735a5d nvme: add poll_group interrupt callback
00:15:33.137 f80471632 nvme: add spdk_nvme_poll_group_get_fd_group()
00:15:33.137 969b360d9 thread: fd_group-based interrupts
00:15:33.137 851f166ec thread: move interrupt allocation to a function
00:15:33.137 c12cb8fe3 util: add method for setting fd_group's wrapper
00:15:33.158 [Pipeline] writeFile
00:15:33.173 [Pipeline] sh
00:15:33.465 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:15:33.480 [Pipeline] sh
00:15:33.766 + cat autorun-spdk.conf
00:15:33.766 SPDK_RUN_FUNCTIONAL_TEST=1
00:15:33.766 SPDK_TEST_NVME=1
00:15:33.766 SPDK_TEST_FTL=1
00:15:33.766 SPDK_TEST_ISAL=1
00:15:33.766 SPDK_RUN_ASAN=1
00:15:33.766 SPDK_RUN_UBSAN=1
00:15:33.766 SPDK_TEST_XNVME=1
00:15:33.766 SPDK_TEST_NVME_FDP=1
00:15:33.766 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:15:33.767 RUN_NIGHTLY=0
00:15:33.775 [Pipeline] }
00:15:33.790 [Pipeline] // stage
00:15:33.805 [Pipeline] stage
00:15:33.807 [Pipeline] { (Run VM)
00:15:33.820 [Pipeline] sh
00:15:34.108 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:15:34.108 + echo 'Start stage prepare_nvme.sh'
00:15:34.108 Start stage prepare_nvme.sh
00:15:34.108 + [[ -n 0 ]]
00:15:34.108 + disk_prefix=ex0
00:15:34.108 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]]
00:15:34.108 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]]
00:15:34.108 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf
00:15:34.108 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:15:34.108 ++ SPDK_TEST_NVME=1
00:15:34.108 ++ SPDK_TEST_FTL=1
00:15:34.108 ++ SPDK_TEST_ISAL=1
00:15:34.108 ++ SPDK_RUN_ASAN=1
00:15:34.108 ++ SPDK_RUN_UBSAN=1
00:15:34.108 ++ SPDK_TEST_XNVME=1
00:15:34.108 ++ SPDK_TEST_NVME_FDP=1
00:15:34.108 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:15:34.108 ++ RUN_NIGHTLY=0
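autorun-spdk.conf is plain shell, catted above and then sourced by prepare_nvme.sh (and later by autorun.sh), so each SPDK_TEST_*/SPDK_RUN_* entry is just a variable that scripts guard on. A sketch of the consumption pattern, mirroring the arithmetic guards visible in the trace below (the echo is illustrative, not from the real scripts):

    source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf
    if (( SPDK_TEST_NVME_FDP == 1 )); then
        # hypothetical consumer; real guards live throughout SPDK's test scripts
        echo "an FDP-capable NVMe disk image will be provisioned"
    fi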
00:15:34.108 + cd /var/jenkins/workspace/nvme-vg-autotest_3
00:15:34.108 + nvme_files=()
00:15:34.108 + declare -A nvme_files
00:15:34.108 + backend_dir=/var/lib/libvirt/images/backends
00:15:34.108 + nvme_files['nvme.img']=5G
00:15:34.108 + nvme_files['nvme-cmb.img']=5G
00:15:34.108 + nvme_files['nvme-multi0.img']=4G
00:15:34.108 + nvme_files['nvme-multi1.img']=4G
00:15:34.108 + nvme_files['nvme-multi2.img']=4G
00:15:34.108 + nvme_files['nvme-openstack.img']=8G
00:15:34.108 + nvme_files['nvme-zns.img']=5G
00:15:34.108 + (( SPDK_TEST_NVME_PMR == 1 ))
00:15:34.108 + (( SPDK_TEST_FTL == 1 ))
00:15:34.108 + nvme_files["nvme-ftl.img"]=6G
00:15:34.108 + (( SPDK_TEST_NVME_FDP == 1 ))
00:15:34.108 + nvme_files["nvme-fdp.img"]=1G
00:15:34.108 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:15:34.108 + for nvme in "${!nvme_files[@]}"
00:15:34.108 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:15:34.108 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:15:34.108 + for nvme in "${!nvme_files[@]}"
00:15:34.108 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G
00:15:34.370 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:15:34.370 + for nvme in "${!nvme_files[@]}"
00:15:34.370 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:15:34.370 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:15:34.632 + for nvme in "${!nvme_files[@]}"
00:15:34.632 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:15:34.632 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:15:34.632 + for nvme in "${!nvme_files[@]}"
00:15:34.632 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:15:35.202 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:15:35.202 + for nvme in "${!nvme_files[@]}"
00:15:35.202 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:15:35.463 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:15:35.463 + for nvme in "${!nvme_files[@]}"
00:15:35.463 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:15:35.726 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:15:35.726 + for nvme in "${!nvme_files[@]}"
00:15:35.726 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G
00:15:35.989 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:15:35.989 + for nvme in "${!nvme_files[@]}"
00:15:35.989 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:15:36.558 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:15:36.558 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:15:36.558 + echo 'End stage prepare_nvme.sh'
00:15:36.558 End stage prepare_nvme.sh
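The trace above expands to a simple pattern: an associative array maps image names to sizes, FTL/FDP entries are appended only when the matching SPDK_TEST_* flag is set, and create_nvme_img.sh runs once per entry. A condensed sketch of that loop, with only a subset of the array shown (paths and flags as in the trace):

    declare -A nvme_files=( ['nvme.img']=5G ['nvme-zns.img']=5G )
    (( SPDK_TEST_FTL == 1 )) && nvme_files["nvme-ftl.img"]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files["nvme-fdp.img"]=1G
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        # creates a raw, falloc-preallocated backing file of the requested size
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/ex0-$nvme" -s "${nvme_files[$nvme]}"
    done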
00:15:36.571 [Pipeline] sh
00:15:36.855 + DISTRO=fedora39
00:15:36.855 + CPUS=10
00:15:36.855 + RAM=12288
00:15:36.855 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:15:36.855 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:15:36.855
00:15:36.855 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant
00:15:36.855 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk
00:15:36.855 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3
00:15:36.855 HELP=0
00:15:36.855 DRY_RUN=0
00:15:36.855 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,
00:15:36.855 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:15:36.855 NVME_AUTO_CREATE=0
00:15:36.855 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,,
00:15:36.855 NVME_CMB=,,,,
00:15:36.855 NVME_PMR=,,,,
00:15:36.855 NVME_ZNS=,,,,
00:15:36.855 NVME_MS=true,,,,
00:15:36.855 NVME_FDP=,,,on,
00:15:36.855 SPDK_VAGRANT_DISTRO=fedora39
00:15:36.855 SPDK_VAGRANT_VMCPU=10
00:15:36.855 SPDK_VAGRANT_VMRAM=12288
00:15:36.855 SPDK_VAGRANT_PROVIDER=libvirt
00:15:36.855 SPDK_VAGRANT_HTTP_PROXY=
00:15:36.855 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:15:36.855 SPDK_OPENSTACK_NETWORK=0
00:15:36.855 VAGRANT_PACKAGE_BOX=0
00:15:36.855 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:15:36.855 FORCE_DISTRO=true
00:15:36.855 VAGRANT_BOX_VERSION=
00:15:36.855 EXTRA_VAGRANTFILES=
00:15:36.855 NIC_MODEL=e1000
00:15:36.855
00:15:36.855 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt'
00:15:36.855 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3
00:15:39.397 Bringing machine 'default' up with 'libvirt' provider...
00:15:39.971 ==> default: Creating image (snapshot of base box volume).
00:15:39.971 ==> default: Creating domain with the following settings...
00:15:39.971 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733785038_7bf9d5ac8e49436cb05f
00:15:39.971 ==> default: -- Domain type: kvm
00:15:39.971 ==> default: -- Cpus: 10
00:15:39.971 ==> default: -- Feature: acpi
00:15:39.971 ==> default: -- Feature: apic
00:15:39.971 ==> default: -- Feature: pae
00:15:39.971 ==> default: -- Memory: 12288M
00:15:39.972 ==> default: -- Memory Backing: hugepages:
00:15:39.972 ==> default: -- Management MAC:
00:15:39.972 ==> default: -- Loader:
00:15:39.972 ==> default: -- Nvram:
00:15:39.972 ==> default: -- Base box: spdk/fedora39
00:15:39.972 ==> default: -- Storage pool: default
00:15:39.972 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733785038_7bf9d5ac8e49436cb05f.img (20G)
00:15:39.972 ==> default: -- Volume Cache: default
00:15:39.972 ==> default: -- Kernel:
00:15:39.972 ==> default: -- Initrd:
00:15:39.972 ==> default: -- Graphics Type: vnc
00:15:39.972 ==> default: -- Graphics Port: -1
00:15:39.972 ==> default: -- Graphics IP: 127.0.0.1
00:15:39.972 ==> default: -- Graphics Password: Not defined
00:15:39.972 ==> default: -- Video Type: cirrus
00:15:39.972 ==> default: -- Video VRAM: 9216
00:15:39.972 ==> default: -- Sound Type:
00:15:39.972 ==> default: -- Keymap: en-us
00:15:39.972 ==> default: -- TPM Path:
00:15:39.972 ==> default: -- INPUT: type=mouse, bus=ps2
00:15:39.972 ==> default: -- Command line args:
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:15:39.972 ==> default: -> value=-drive,
00:15:39.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:15:39.972 ==> default: -> value=-drive,
00:15:39.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0,
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:15:39.972 ==> default: -> value=-drive,
00:15:39.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:15:39.972 ==> default: -> value=-drive,
00:15:39.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:15:39.972 ==> default: -> value=-drive,
00:15:39.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:15:39.972 ==> default: -> value=-drive,
00:15:39.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:15:39.972 ==> default: -> value=-device,
00:15:39.972 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
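Read together, these args define four NVMe controllers: serial 12340 with one metadata-capable namespace (ms=64) on the FTL image, 12341 with a single plain namespace, 12342 with three namespaces (multi0/1/2), and 12343 attached to an FDP-enabled subsystem. Stripped of the vagrant/libvirt plumbing, the FDP controller alone corresponds to this qemu-system-x86_64 fragment, where "..." stands for the machine-level flags libvirt adds (all device/drive options are taken verbatim from the args above):

    qemu-system-x86_64 ... \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096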
00:15:39.972 ==> default: Creating shared folders metadata...
00:15:39.972 ==> default: Starting domain.
00:15:41.884 ==> default: Waiting for domain to get an IP address...
00:15:56.797 ==> default: Waiting for SSH to become available...
00:15:56.797 ==> default: Configuring and enabling network interfaces...
00:16:00.098 default: SSH address: 192.168.121.223:22
00:16:00.098 default: SSH username: vagrant
00:16:00.098 default: SSH auth method: private key
00:16:01.472 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:16:08.036 ==> default: Mounting SSHFS shared folder...
00:16:09.408 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:16:09.408 ==> default: Checking Mount..
00:16:10.352 ==> default: Folder Successfully Mounted!
00:16:10.352
00:16:10.352 SUCCESS!
00:16:10.352
00:16:10.352 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:16:10.352 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:16:10.352 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:16:10.352
00:16:10.360 [Pipeline] }
00:16:10.375 [Pipeline] // stage
00:16:10.384 [Pipeline] dir
00:16:10.385 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt
00:16:10.386 [Pipeline] {
00:16:10.399 [Pipeline] catchError
00:16:10.401 [Pipeline] {
00:16:10.414 [Pipeline] sh
00:16:10.694 + vagrant ssh-config --host vagrant
00:16:10.694 + sed -ne '/^Host/,$p'
00:16:10.694 + tee ssh_conf
00:16:13.266 Host vagrant
00:16:13.266 HostName 192.168.121.223
00:16:13.266 User vagrant
00:16:13.266 Port 22
00:16:13.266 UserKnownHostsFile /dev/null
00:16:13.266 StrictHostKeyChecking no
00:16:13.266 PasswordAuthentication no
00:16:13.266 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:16:13.266 IdentitiesOnly yes
00:16:13.266 LogLevel FATAL
00:16:13.266 ForwardAgent yes
00:16:13.266 ForwardX11 yes
00:16:13.266
00:16:13.279 [Pipeline] withEnv
00:16:13.281 [Pipeline] {
00:16:13.295 [Pipeline] sh
00:16:13.572 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:16:13.572 source /etc/os-release
00:16:13.572 [[ -e /image.version ]] && img=$(< /image.version)
00:16:13.572 # Minimal, systemd-like check.
00:16:13.572 if [[ -e /.dockerenv ]]; then
00:16:13.572 # Clear garbage from the node'\''s name:
00:16:13.572 # agt-er_autotest_547-896 -> autotest_547-896
00:16:13.572 # $HOSTNAME is the actual container id
00:16:13.572 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:16:13.572 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:16:13.572 # We can assume this is a mount from a host where container is running,
00:16:13.572 # so fetch its hostname to easily identify the target swarm worker.
00:16:13.572 container="$(< /etc/hostname) ($agent)"
00:16:13.572 else
00:16:13.572 # Fallback
00:16:13.572 container=$agent
00:16:13.572 fi
00:16:13.572 fi
00:16:13.572 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:16:13.572 '
00:16:13.581 [Pipeline] }
00:16:13.597 [Pipeline] // withEnv
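The remote probe above emits a single pipe-delimited identity line: distro name and version, kernel release, image version, and container id. Given the /etc/os-release and uname output later in this log, the expected shape on this VM would be roughly the following (values illustrative; /image.version and container state were not captured here):

    Fedora Linux 39|6.8.9-200.fc39.x86_64|N/A|N/A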
00:16:13.605 [Pipeline] setCustomBuildProperty
00:16:13.620 [Pipeline] stage
00:16:13.622 [Pipeline] { (Tests)
00:16:13.637 [Pipeline] sh
00:16:13.918 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:16:14.188 [Pipeline] sh
00:16:14.550 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:16:14.578 [Pipeline] timeout
00:16:14.579 Timeout set to expire in 50 min
00:16:14.581 [Pipeline] {
00:16:14.595 [Pipeline] sh
00:16:14.872 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:16:15.130 HEAD is now at 1ae735a5d nvme: add poll_group interrupt callback
00:16:15.158 [Pipeline] sh
00:16:15.436 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:16:15.449 [Pipeline] sh
00:16:15.727 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:16:15.742 [Pipeline] sh
00:16:16.020 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:16:16.020 ++ readlink -f spdk_repo
00:16:16.020 + DIR_ROOT=/home/vagrant/spdk_repo
00:16:16.020 + [[ -n /home/vagrant/spdk_repo ]]
00:16:16.020 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:16:16.020 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:16:16.020 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:16:16.020 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:16:16.020 + [[ -d /home/vagrant/spdk_repo/output ]]
00:16:16.020 + [[ nvme-vg-autotest == pkgdep-* ]]
00:16:16.020 + cd /home/vagrant/spdk_repo
00:16:16.020 + source /etc/os-release
00:16:16.020 ++ NAME='Fedora Linux'
00:16:16.020 ++ VERSION='39 (Cloud Edition)'
00:16:16.020 ++ ID=fedora
00:16:16.020 ++ VERSION_ID=39
00:16:16.020 ++ VERSION_CODENAME=
00:16:16.020 ++ PLATFORM_ID=platform:f39
00:16:16.020 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:16:16.020 ++ ANSI_COLOR='0;38;2;60;110;180'
00:16:16.020 ++ LOGO=fedora-logo-icon
00:16:16.020 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:16:16.020 ++ HOME_URL=https://fedoraproject.org/
00:16:16.020 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:16:16.020 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:16:16.020 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:16:16.020 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:16:16.020 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:16:16.020 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:16:16.020 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:16:16.020 ++ SUPPORT_END=2024-11-12
00:16:16.020 ++ VARIANT='Cloud Edition'
00:16:16.020 ++ VARIANT_ID=cloud
00:16:16.020 + uname -a
00:16:16.020 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:16:16.020 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:16:16.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:16:16.536 Hugepages
00:16:16.536 node hugesize free / total
00:16:16.536 node0 1048576kB 0 / 0
00:16:16.794 node0 2048kB 0 / 0
00:16:16.794
00:16:16.794 Type BDF Vendor Device NUMA Driver Device Block devices
00:16:16.794 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:16:16.794 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:16:16.794 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:16:16.794 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:16:16.794 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:16:16.794 + rm -f /tmp/spdk-ld-path
00:16:16.794 + source autorun-spdk.conf
00:16:16.794 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:16:16.794 ++ SPDK_TEST_NVME=1
00:16:16.794 ++ SPDK_TEST_FTL=1
00:16:16.794 ++ SPDK_TEST_ISAL=1
00:16:16.794 ++ SPDK_RUN_ASAN=1
00:16:16.794 ++ SPDK_RUN_UBSAN=1
00:16:16.794 ++ SPDK_TEST_XNVME=1
00:16:16.794 ++ SPDK_TEST_NVME_FDP=1
00:16:16.794 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:16:16.794 ++ RUN_NIGHTLY=0
00:16:16.794 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:16:16.794 + [[ -n '' ]]
00:16:16.794 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:16:16.794 + for M in /var/spdk/build-*-manifest.txt
00:16:16.794 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:16:16.794 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:16:16.794 + for M in /var/spdk/build-*-manifest.txt
00:16:16.794 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:16:16.794 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:16:16.794 + for M in /var/spdk/build-*-manifest.txt
00:16:16.794 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:16:16.794 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:16:16.794 ++ uname
00:16:16.794 + [[ Linux == \L\i\n\u\x ]]
00:16:16.794 + sudo dmesg -T
00:16:16.794 + sudo dmesg --clear
00:16:16.794 + dmesg_pid=5024
00:16:16.794 + [[ Fedora Linux == FreeBSD ]]
00:16:16.794 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:16:16.794 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:16:16.794 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:16:16.794 + sudo dmesg -Tw
00:16:16.794 + [[ -x /usr/src/fio-static/fio ]]
00:16:16.794 + export FIO_BIN=/usr/src/fio-static/fio
00:16:16.794 + FIO_BIN=/usr/src/fio-static/fio
00:16:16.794 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:16:16.794 + [[ ! -v VFIO_QEMU_BIN ]]
00:16:16.794 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:16:16.794 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:16:16.794 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:16:16.794 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:16:16.794 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:16:16.794 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:16:16.794 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
22:57:55 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
22:57:55 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
22:57:55 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
22:57:55 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
22:57:55 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
22:57:55 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
22:57:55 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
22:57:55 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
22:57:55 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
22:57:55 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
22:57:55 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
22:57:55 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
22:57:55 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
22:57:55 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:16:17.053 22:57:55 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:16:17.053 22:57:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
22:57:55 -- scripts/common.sh@15 -- $ shopt -s extglob
22:57:55 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
22:57:55 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
22:57:55 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
22:57:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:57:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:57:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:57:55 -- paths/export.sh@5 -- $ export PATH
22:57:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:57:55 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
22:57:55 -- common/autobuild_common.sh@493 -- $ date +%s
22:57:55 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733785075.XXXXXX
22:57:55 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733785075.5w8YL4
22:57:55 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
22:57:55 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
22:57:55 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
22:57:55 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
22:57:55 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
22:57:55 -- common/autobuild_common.sh@509 -- $ get_config_params
22:57:55 -- common/autotest_common.sh@409 -- $ xtrace_disable
22:57:55 -- common/autotest_common.sh@10 -- $ set +x
22:57:55 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
22:57:55 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
22:57:55 -- pm/common@17 -- $ local monitor
22:57:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
22:57:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
22:57:55 -- pm/common@25 -- $ sleep 1
22:57:55 -- pm/common@21 -- $ date +%s
22:57:55 -- pm/common@21 -- $ date +%s
22:57:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733785075
00:16:17.053 22:57:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733785075
00:16:17.053 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733785075_collect-vmstat.pm.log
00:16:17.053 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733785075_collect-cpu-load.pm.log
00:16:17.987 22:57:56 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:16:17.987 22:57:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:16:17.987 22:57:56 -- spdk/autobuild.sh@12 -- $ umask 022
00:16:17.987 22:57:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:16:17.987 22:57:56 -- spdk/autobuild.sh@16 -- $ date -u
00:16:17.987 Mon Dec 9 10:57:56 PM UTC 2024
00:16:17.987 22:57:56 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:16:17.987 v25.01-pre-320-g1ae735a5d
00:16:17.987 22:57:56 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:16:17.987 22:57:56 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:16:17.987 22:57:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:16:17.987 22:57:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:16:17.987 22:57:56 -- common/autotest_common.sh@10 -- $ set +x
00:16:17.987 ************************************
00:16:17.987 START TEST asan
00:16:17.987 ************************************
00:16:17.987 using asan
00:16:17.987 22:57:56 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:16:17.987
00:16:17.987 real 0m0.000s
00:16:17.987 user 0m0.000s
00:16:17.987 sys 0m0.000s
00:16:17.987 22:57:56 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:16:17.987 22:57:56 asan -- common/autotest_common.sh@10 -- $ set +x
00:16:17.987 ************************************
00:16:17.987 END TEST asan
00:16:17.987 ************************************
00:16:17.987 22:57:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:16:17.987 22:57:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:16:17.987 22:57:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:16:17.987 22:57:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:16:17.987 22:57:56 -- common/autotest_common.sh@10 -- $ set +x
00:16:17.987 ************************************
00:16:17.987 START TEST ubsan
00:16:17.987 ************************************
00:16:17.987 using ubsan
00:16:17.987 22:57:56 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:16:17.987
00:16:17.987 real 0m0.000s
00:16:17.987 user 0m0.000s
00:16:17.987 sys 0m0.000s
00:16:17.987 22:57:56 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:16:17.987 22:57:56 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:16:17.987 ************************************
00:16:17.987 END TEST ubsan
00:16:17.987 ************************************
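run_test is the banner-and-timing wrapper that produced the START TEST/END TEST blocks above; even the trivial `echo 'using asan'` probe gets a real/user/sys report. A reduced sketch of its shape, assuming the observed output only (the real helper lives in SPDK's common/autotest_common.sh and also manages xtrace and exit codes):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        time "$@"            # wrapped command; timing prints as real/user/sys
        echo "END TEST $name"
        echo "************************************"
    }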
00:16:17.987 22:57:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:16:17.987 22:57:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:16:17.987 22:57:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:16:17.987 22:57:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:16:17.987 22:57:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:16:17.987 22:57:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:16:17.987 22:57:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:16:17.987 22:57:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:16:17.987 22:57:56 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:16:18.245 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:16:18.245 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:16:18.503 Using 'verbs' RDMA provider
00:16:29.403 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:16:39.369 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:16:39.627 Creating mk/config.mk...done.
00:16:39.627 Creating mk/cc.flags.mk...done.
00:16:39.627 Type 'make' to build.
00:16:39.627 22:58:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:16:39.627 22:58:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:16:39.627 22:58:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:16:39.627 22:58:17 -- common/autotest_common.sh@10 -- $ set +x
00:16:39.627 ************************************
00:16:39.627 START TEST make
00:16:39.627 ************************************
00:16:39.627 22:58:17 make -- common/autotest_common.sh@1129 -- $ make -j10
00:16:39.884 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:16:39.885 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:16:39.885 meson setup builddir \
00:16:39.885 -Dwith-libaio=enabled \
00:16:39.885 -Dwith-liburing=enabled \
00:16:39.885 -Dwith-libvfn=disabled \
00:16:39.885 -Dwith-spdk=disabled \
00:16:39.885 -Dexamples=false \
00:16:39.885 -Dtests=false \
00:16:39.885 -Dtools=false && \
00:16:39.885 meson compile -C builddir && \
00:16:39.885 cd -)
00:16:39.885 make[1]: Nothing to be done for 'all'.
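SPDK's make delegates the xnvme subproject to meson, with the full invocation echoed above. To iterate on just that subproject without rerunning SPDK's make, the same two steps can be repeated by hand (paths and options exactly as used in this build):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir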
00:16:42.417 The Meson build system
00:16:42.417 Version: 1.5.0
00:16:42.417 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:16:42.417 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:16:42.417 Build type: native build
00:16:42.417 Project name: xnvme
00:16:42.417 Project version: 0.7.5
00:16:42.417 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:16:42.417 C linker for the host machine: cc ld.bfd 2.40-14
00:16:42.417 Host machine cpu family: x86_64
00:16:42.417 Host machine cpu: x86_64
00:16:42.417 Message: host_machine.system: linux
00:16:42.417 Compiler for C supports arguments -Wno-missing-braces: YES
00:16:42.417 Compiler for C supports arguments -Wno-cast-function-type: YES
00:16:42.417 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:16:42.417 Run-time dependency threads found: YES
00:16:42.417 Has header "setupapi.h" : NO
00:16:42.417 Has header "linux/blkzoned.h" : YES
00:16:42.417 Has header "linux/blkzoned.h" : YES (cached)
00:16:42.417 Has header "libaio.h" : YES
00:16:42.417 Library aio found: YES
00:16:42.417 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:16:42.417 Run-time dependency liburing found: YES 2.2
00:16:42.417 Dependency libvfn skipped: feature with-libvfn disabled
00:16:42.417 Found CMake: /usr/bin/cmake (3.27.7)
00:16:42.417 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:16:42.417 Subproject spdk : skipped: feature with-spdk disabled
00:16:42.417 Run-time dependency appleframeworks found: NO (tried framework)
00:16:42.417 Run-time dependency appleframeworks found: NO (tried framework)
00:16:42.417 Library rt found: YES
00:16:42.417 Checking for function "clock_gettime" with dependency -lrt: YES
00:16:42.417 Configuring xnvme_config.h using configuration
00:16:42.417 Configuring xnvme.spec using configuration
00:16:42.417 Run-time dependency bash-completion found: YES 2.11
00:16:42.417 Message: Bash-completions: /usr/share/bash-completion/completions
00:16:42.417 Program cp found: YES (/usr/bin/cp)
00:16:42.417 Build targets in project: 3
00:16:42.417
00:16:42.417 xnvme 0.7.5
00:16:42.417
00:16:42.417 Subprojects
00:16:42.417 spdk : NO Feature 'with-spdk' disabled
00:16:42.417
00:16:42.417 User defined options
00:16:42.417 examples : false
00:16:42.417 tests : false
00:16:42.417 tools : false
00:16:42.417 with-libaio : enabled
00:16:42.417 with-liburing: enabled
00:16:42.417 with-libvfn : disabled
00:16:42.417 with-spdk : disabled
00:16:42.417
00:16:42.417 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:16:42.676 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:16:42.676 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:16:42.676 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:16:42.676 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:16:42.676 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:16:42.676 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:16:42.676 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:16:42.676 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:16:42.676 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:16:42.676 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:16:42.676 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:16:42.676 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:16:42.935 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:16:42.935 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:16:42.935 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:16:42.935 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:16:42.935 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:16:42.935 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:16:42.935 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:16:42.935 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:16:42.935 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:16:42.935 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:16:42.935 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:16:42.935 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:16:42.935 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:16:42.935 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:16:42.935 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:16:42.935 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:16:42.935 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:16:42.935 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:16:42.935 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:16:42.935 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:16:42.935 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:16:42.935 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:16:42.935 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:16:42.935 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:16:42.935 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:16:42.935 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:16:42.935 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:16:42.935 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:16:42.935 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:16:42.935 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:16:42.935 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:16:43.192 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:16:43.192 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:16:43.192 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:16:43.192 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:16:43.192 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:16:43.192 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:16:43.192 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:16:43.192 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:16:43.192 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:16:43.192 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:16:43.192 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:16:43.192 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:16:43.192 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:16:43.192 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:16:43.192 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:16:43.192 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:16:43.192 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:16:43.192 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:16:43.192 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:16:43.192 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:16:43.192 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:16:43.192 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:16:43.192 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:16:43.192 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:16:43.192 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:16:43.450 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:16:43.450 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:16:43.450 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:16:43.450 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:16:43.450 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:16:43.450 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:16:43.707 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:16:43.707 [75/76] Linking static target lib/libxnvme.a
00:16:43.707 [76/76] Linking target lib/libxnvme.so.0.7.5
00:16:43.707 INFO: autodetecting backend as ninja
00:16:43.707 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:16:43.964 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:16:50.535 The Meson build system
00:16:50.535 Version: 1.5.0
00:16:50.535 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:16:50.535 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:16:50.535 Build type: native build
00:16:50.535 Program cat found: YES (/usr/bin/cat)
00:16:50.535 Project name: DPDK
00:16:50.535 Project version: 24.03.0
00:16:50.535 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:16:50.535 C linker for the host machine: cc ld.bfd 2.40-14
00:16:50.535 Host machine cpu family: x86_64
00:16:50.535 Host machine cpu: x86_64
00:16:50.535 Message: ## Building in Developer Mode ##
00:16:50.535 Program pkg-config found: YES (/usr/bin/pkg-config)
00:16:50.535 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:16:50.535 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:16:50.535 Program python3 found: YES (/usr/bin/python3)
00:16:50.535 Program cat found: YES (/usr/bin/cat)
00:16:50.535 Compiler for C supports arguments -march=native: YES
00:16:50.535 Checking for size of "void *" : 8
00:16:50.535 Checking for size of "void *" : 8 (cached)
00:16:50.535 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:16:50.535 Library m found: YES
00:16:50.535 Library numa found: YES
00:16:50.535 Has header "numaif.h" : YES
00:16:50.535 Library fdt found: NO
00:16:50.535 Library execinfo found: NO
00:16:50.535 Has header "execinfo.h" : YES
00:16:50.535 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:16:50.535 Run-time dependency libarchive found: NO (tried pkgconfig)
00:16:50.535 Run-time dependency libbsd found: NO (tried pkgconfig)
00:16:50.535 Run-time dependency jansson found: NO (tried pkgconfig)
00:16:50.535 Run-time dependency openssl found: YES 3.1.1
00:16:50.535 Run-time dependency libpcap found: YES 1.10.4
00:16:50.535 Has header "pcap.h" with dependency libpcap: YES
00:16:50.535 Compiler for C supports arguments -Wcast-qual: YES
00:16:50.535 Compiler for C supports arguments -Wdeprecated: YES
00:16:50.535 Compiler for C supports arguments -Wformat: YES
00:16:50.535 Compiler for C supports arguments -Wformat-nonliteral: NO
00:16:50.535 Compiler for C supports arguments -Wformat-security: NO
00:16:50.535 Compiler for C supports arguments -Wmissing-declarations: YES
00:16:50.535 Compiler for C supports arguments -Wmissing-prototypes: YES
00:16:50.535 Compiler for C supports arguments -Wnested-externs: YES
00:16:50.535 Compiler for C supports arguments -Wold-style-definition: YES
00:16:50.535 Compiler for C supports arguments -Wpointer-arith: YES
00:16:50.535 Compiler for C supports arguments -Wsign-compare: YES
00:16:50.535 Compiler for C supports arguments -Wstrict-prototypes: YES
00:16:50.535 Compiler for C supports arguments -Wundef: YES
00:16:50.535 Compiler for C supports arguments -Wwrite-strings: YES
00:16:50.535 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:16:50.535 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:16:50.535 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:16:50.535 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:16:50.535 Program objdump found: YES (/usr/bin/objdump)
00:16:50.535 Compiler for C supports arguments -mavx512f: YES
00:16:50.535 Checking if "AVX512 checking" compiles: YES
00:16:50.535 Fetching value of define "__SSE4_2__" : 1
00:16:50.535 Fetching value of define "__AES__" : 1
00:16:50.535 Fetching value of define "__AVX__" : 1
00:16:50.535 Fetching value of define "__AVX2__" : 1
00:16:50.535 Fetching value of define "__AVX512BW__" : 1
00:16:50.535 Fetching value of define "__AVX512CD__" : 1
00:16:50.535 Fetching value of define "__AVX512DQ__" : 1
00:16:50.535 Fetching value of define "__AVX512F__" : 1
00:16:50.535 Fetching value of define "__AVX512VL__" : 1
00:16:50.535 Fetching value of define "__PCLMUL__" : 1
00:16:50.535 Fetching value of define "__RDRND__" : 1
00:16:50.535 Fetching value of define "__RDSEED__" : 1
00:16:50.535 Fetching value of define "__VPCLMULQDQ__" : 1
00:16:50.535 Fetching value of define "__znver1__" : (undefined)
00:16:50.535 Fetching value of define "__znver2__" : (undefined)
00:16:50.535 Fetching value of define "__znver3__" : (undefined)
00:16:50.535 Fetching value of define "__znver4__" : (undefined)
00:16:50.535 Library asan found: YES
00:16:50.535 Compiler for C supports arguments -Wno-format-truncation: YES
00:16:50.535 Message: lib/log: Defining dependency "log"
00:16:50.535 Message: lib/kvargs: Defining dependency "kvargs"
00:16:50.535 Message: lib/telemetry: Defining dependency "telemetry"
00:16:50.535 Library rt found: YES
00:16:50.535 Checking for function "getentropy" : NO
00:16:50.535 Message: lib/eal: Defining dependency "eal"
00:16:50.535 Message: lib/ring: Defining dependency "ring"
00:16:50.535 Message: lib/rcu: Defining dependency "rcu"
00:16:50.535 Message: lib/mempool: Defining dependency "mempool"
00:16:50.535 Message: lib/mbuf: Defining dependency "mbuf"
00:16:50.535 Fetching value of define "__PCLMUL__" : 1 (cached)
00:16:50.535 Fetching value of define "__AVX512F__" : 1 (cached)
00:16:50.535 Fetching value of define "__AVX512BW__" : 1 (cached)
00:16:50.535 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:16:50.535 Fetching value of define "__AVX512VL__" : 1 (cached)
00:16:50.535 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:16:50.535 Compiler for C supports arguments -mpclmul: YES
00:16:50.535 Compiler for C supports arguments -maes: YES
00:16:50.535 Compiler for C supports arguments -mavx512f: YES (cached)
00:16:50.535 Compiler for C supports arguments -mavx512bw: YES
00:16:50.535 Compiler for C supports arguments -mavx512dq: YES
00:16:50.535 Compiler for C supports arguments -mavx512vl: YES
00:16:50.535 Compiler for C supports arguments -mvpclmulqdq: YES
00:16:50.535 Compiler for C supports arguments -mavx2: YES
00:16:50.535 Compiler for C supports arguments -mavx: YES
00:16:50.535 Message: lib/net: Defining dependency "net"
00:16:50.535 Message: lib/meter: Defining dependency "meter"
00:16:50.535 Message: lib/ethdev: Defining dependency "ethdev"
00:16:50.535 Message: lib/pci: Defining dependency "pci"
00:16:50.535 Message: lib/cmdline: Defining dependency "cmdline"
00:16:50.535 Message: lib/hash: Defining dependency "hash"
00:16:50.535 Message: lib/timer: Defining dependency "timer"
00:16:50.535 Message: lib/compressdev: Defining dependency "compressdev"
00:16:50.535 Message: lib/cryptodev: Defining dependency "cryptodev"
00:16:50.535 Message: lib/dmadev: Defining dependency "dmadev"
00:16:50.535 Compiler for C supports arguments -Wno-cast-qual: YES
00:16:50.535 Message: lib/power: Defining dependency "power"
00:16:50.535 Message: lib/reorder: Defining dependency "reorder"
00:16:50.535 Message: lib/security: Defining dependency "security"
00:16:50.535 Has header "linux/userfaultfd.h" : YES
00:16:50.535 Has header "linux/vduse.h" : YES
00:16:50.535 Message: lib/vhost: Defining dependency "vhost"
00:16:50.535 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:16:50.535 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:16:50.535 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:16:50.535 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:16:50.535 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:16:50.535 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:16:50.535 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:16:50.535 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:16:50.535 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:16:50.535 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:16:50.535 Program doxygen found: YES (/usr/local/bin/doxygen)
00:16:50.535 Configuring doxy-api-html.conf using configuration
00:16:50.535 Configuring doxy-api-man.conf using configuration
00:16:50.535 Program mandb found: YES (/usr/bin/mandb)
00:16:50.535 Program sphinx-build found: NO
00:16:50.535 Configuring rte_build_config.h using configuration
00:16:50.535 Message:
00:16:50.536 =================
00:16:50.536 Applications Enabled
00:16:50.536 =================
00:16:50.536
00:16:50.536 apps:
00:16:50.536
00:16:50.536
00:16:50.536 Message:
00:16:50.536 =================
00:16:50.536 Libraries Enabled
00:16:50.536 =================
00:16:50.536
00:16:50.536 libs:
00:16:50.536 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:16:50.536 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:16:50.536 cryptodev, dmadev, power, reorder, security, vhost,
00:16:50.536
00:16:50.536 Message:
00:16:50.536 ===============
00:16:50.536 Drivers Enabled
00:16:50.536 ===============
00:16:50.536
00:16:50.536 common:
00:16:50.536
00:16:50.536 bus:
00:16:50.536 pci, vdev,
00:16:50.536 mempool:
00:16:50.536 ring,
00:16:50.536 dma:
00:16:50.536
00:16:50.536 net:
00:16:50.536
00:16:50.536 crypto:
00:16:50.536
00:16:50.536 compress:
00:16:50.536
00:16:50.536 vdpa:
00:16:50.536
00:16:50.536
00:16:50.536 Message:
00:16:50.536 =================
00:16:50.536 Content Skipped
00:16:50.536 =================
00:16:50.536
00:16:50.536 apps:
00:16:50.536 dumpcap: explicitly disabled via build config
00:16:50.536 graph: explicitly disabled via build config
00:16:50.536 pdump: explicitly disabled via build config
00:16:50.536 proc-info: explicitly disabled via build config
00:16:50.536 test-acl: explicitly disabled via build config
00:16:50.536 test-bbdev: explicitly disabled via build config
00:16:50.536 test-cmdline: explicitly disabled via build config
00:16:50.536 test-compress-perf: explicitly disabled via build config
00:16:50.536 test-crypto-perf: explicitly disabled via build config
00:16:50.536 test-dma-perf: explicitly disabled via build config
00:16:50.536 test-eventdev: explicitly disabled via build config
00:16:50.536 test-fib: explicitly disabled via build config
00:16:50.536 test-flow-perf: explicitly disabled via build config
00:16:50.536 test-gpudev: explicitly disabled via build config
00:16:50.536 test-mldev: explicitly disabled via build config
00:16:50.536 test-pipeline: explicitly disabled via build config
00:16:50.536 test-pmd: explicitly disabled via build config
00:16:50.536 test-regex: explicitly disabled via build config
00:16:50.536 test-sad: explicitly disabled via build config
00:16:50.536 test-security-perf: explicitly disabled via build config
00:16:50.536
00:16:50.536 libs:
00:16:50.536 argparse: explicitly disabled via build config
00:16:50.536 metrics: explicitly disabled via build config
00:16:50.536 acl: explicitly disabled via build config
00:16:50.536 bbdev: explicitly disabled via build config
00:16:50.536 bitratestats: explicitly disabled via build config
00:16:50.536 bpf: explicitly disabled via build config
00:16:50.536 cfgfile: explicitly disabled via build config
00:16:50.536 distributor: explicitly disabled via build config
00:16:50.536 efd: explicitly disabled via build config
00:16:50.536 eventdev: explicitly disabled via build config
00:16:50.536 dispatcher: explicitly disabled via build config
00:16:50.536 gpudev: explicitly disabled via build config
00:16:50.536 gro: explicitly disabled via build config
00:16:50.536 gso: explicitly disabled via build config
00:16:50.536 ip_frag: explicitly disabled via build config
00:16:50.536 jobstats: explicitly disabled via build config
00:16:50.536 latencystats: explicitly disabled via build config
00:16:50.536 lpm: explicitly disabled via build config
00:16:50.536 member: explicitly disabled via build config
00:16:50.536 pcapng: explicitly disabled via build config
00:16:50.536 rawdev: explicitly disabled via build config
00:16:50.536 regexdev: explicitly disabled via build config
00:16:50.536 mldev: explicitly disabled via build config
00:16:50.536 rib: explicitly disabled via build config
00:16:50.536 sched: explicitly disabled via build config
00:16:50.536 stack: explicitly disabled via build config
00:16:50.536 ipsec: explicitly disabled via build config
00:16:50.536 pdcp: explicitly disabled via build config
00:16:50.536 fib: explicitly disabled via build config
00:16:50.536 port: explicitly disabled via build config
00:16:50.536 pdump: explicitly disabled via build config
00:16:50.536 table: explicitly disabled via build config
00:16:50.536 pipeline: explicitly disabled via build config
00:16:50.536 graph: explicitly disabled via build config
00:16:50.536 node: explicitly disabled via build config
00:16:50.536
00:16:50.536 drivers:
00:16:50.536 common/cpt: not in enabled drivers build config
00:16:50.536 common/dpaax: not in enabled drivers build config
00:16:50.536 common/iavf: not in enabled drivers build config
00:16:50.536 common/idpf: not in enabled drivers build config
00:16:50.536 common/ionic: not in enabled drivers build config
00:16:50.536 common/mvep: not in enabled drivers build config
00:16:50.536 common/octeontx: not in enabled drivers build config
00:16:50.536 bus/auxiliary: not in enabled drivers build config
00:16:50.536 bus/cdx: not in enabled drivers build config
00:16:50.536 bus/dpaa: not in enabled drivers build config
00:16:50.536 bus/fslmc: not in enabled drivers build config
00:16:50.536 bus/ifpga: not in enabled drivers build config
00:16:50.536 bus/platform: not in enabled drivers build config
00:16:50.536 bus/uacce: not in enabled drivers build config
00:16:50.536 bus/vmbus: not in enabled drivers build config
00:16:50.536 common/cnxk: not in enabled drivers build config
00:16:50.536 common/mlx5: not in enabled drivers build config
00:16:50.536 common/nfp: not in enabled drivers build config
00:16:50.536 common/nitrox: not in enabled drivers build config
00:16:50.536 common/qat: not in enabled drivers build config
00:16:50.536 common/sfc_efx: not in enabled drivers build config
00:16:50.536 mempool/bucket: not in enabled drivers build config
00:16:50.536 mempool/cnxk: not in enabled drivers build config
00:16:50.536 mempool/dpaa: not in enabled drivers build config
00:16:50.536 mempool/dpaa2: not in enabled drivers build config
00:16:50.536 mempool/octeontx: not in enabled drivers build config
00:16:50.536 mempool/stack: not in enabled drivers build config
00:16:50.536 dma/cnxk: not in enabled drivers build config
00:16:50.536 dma/dpaa: not in enabled drivers build config
00:16:50.536 dma/dpaa2: not in enabled drivers build config
00:16:50.536 dma/hisilicon: not in enabled drivers build config
00:16:50.536 dma/idxd: not in enabled drivers build config
00:16:50.536 dma/ioat: not in enabled drivers build config
00:16:50.536 dma/skeleton: not in enabled drivers build config
00:16:50.536 net/af_packet: not in enabled drivers build config
00:16:50.536 net/af_xdp: not in enabled drivers build config
00:16:50.536 net/ark: not in enabled drivers build config
00:16:50.536 net/atlantic: not in enabled drivers build config
00:16:50.536 net/avp: not in enabled drivers build config
00:16:50.536 net/axgbe: not in enabled drivers build config
00:16:50.536 net/bnx2x: not in enabled drivers build config
00:16:50.536 net/bnxt: not in enabled drivers build config
00:16:50.536 net/bonding: not in enabled drivers build config
00:16:50.536 net/cnxk: not in enabled drivers build config
00:16:50.536 net/cpfl: not in enabled drivers
build config 00:16:50.536 net/cxgbe: not in enabled drivers build config 00:16:50.536 net/dpaa: not in enabled drivers build config 00:16:50.536 net/dpaa2: not in enabled drivers build config 00:16:50.536 net/e1000: not in enabled drivers build config 00:16:50.536 net/ena: not in enabled drivers build config 00:16:50.536 net/enetc: not in enabled drivers build config 00:16:50.536 net/enetfec: not in enabled drivers build config 00:16:50.536 net/enic: not in enabled drivers build config 00:16:50.536 net/failsafe: not in enabled drivers build config 00:16:50.536 net/fm10k: not in enabled drivers build config 00:16:50.536 net/gve: not in enabled drivers build config 00:16:50.536 net/hinic: not in enabled drivers build config 00:16:50.536 net/hns3: not in enabled drivers build config 00:16:50.536 net/i40e: not in enabled drivers build config 00:16:50.536 net/iavf: not in enabled drivers build config 00:16:50.536 net/ice: not in enabled drivers build config 00:16:50.536 net/idpf: not in enabled drivers build config 00:16:50.536 net/igc: not in enabled drivers build config 00:16:50.536 net/ionic: not in enabled drivers build config 00:16:50.536 net/ipn3ke: not in enabled drivers build config 00:16:50.536 net/ixgbe: not in enabled drivers build config 00:16:50.536 net/mana: not in enabled drivers build config 00:16:50.536 net/memif: not in enabled drivers build config 00:16:50.536 net/mlx4: not in enabled drivers build config 00:16:50.536 net/mlx5: not in enabled drivers build config 00:16:50.536 net/mvneta: not in enabled drivers build config 00:16:50.536 net/mvpp2: not in enabled drivers build config 00:16:50.536 net/netvsc: not in enabled drivers build config 00:16:50.536 net/nfb: not in enabled drivers build config 00:16:50.536 net/nfp: not in enabled drivers build config 00:16:50.536 net/ngbe: not in enabled drivers build config 00:16:50.536 net/null: not in enabled drivers build config 00:16:50.536 net/octeontx: not in enabled drivers build config 00:16:50.536 net/octeon_ep: not in enabled drivers build config 00:16:50.536 net/pcap: not in enabled drivers build config 00:16:50.536 net/pfe: not in enabled drivers build config 00:16:50.536 net/qede: not in enabled drivers build config 00:16:50.536 net/ring: not in enabled drivers build config 00:16:50.536 net/sfc: not in enabled drivers build config 00:16:50.536 net/softnic: not in enabled drivers build config 00:16:50.536 net/tap: not in enabled drivers build config 00:16:50.536 net/thunderx: not in enabled drivers build config 00:16:50.536 net/txgbe: not in enabled drivers build config 00:16:50.536 net/vdev_netvsc: not in enabled drivers build config 00:16:50.536 net/vhost: not in enabled drivers build config 00:16:50.536 net/virtio: not in enabled drivers build config 00:16:50.536 net/vmxnet3: not in enabled drivers build config 00:16:50.536 raw/*: missing internal dependency, "rawdev" 00:16:50.536 crypto/armv8: not in enabled drivers build config 00:16:50.536 crypto/bcmfs: not in enabled drivers build config 00:16:50.536 crypto/caam_jr: not in enabled drivers build config 00:16:50.536 crypto/ccp: not in enabled drivers build config 00:16:50.536 crypto/cnxk: not in enabled drivers build config 00:16:50.536 crypto/dpaa_sec: not in enabled drivers build config 00:16:50.536 crypto/dpaa2_sec: not in enabled drivers build config 00:16:50.536 crypto/ipsec_mb: not in enabled drivers build config 00:16:50.536 crypto/mlx5: not in enabled drivers build config 00:16:50.536 crypto/mvsam: not in enabled drivers build config 00:16:50.536 crypto/nitrox: 
not in enabled drivers build config 00:16:50.536 crypto/null: not in enabled drivers build config 00:16:50.537 crypto/octeontx: not in enabled drivers build config 00:16:50.537 crypto/openssl: not in enabled drivers build config 00:16:50.537 crypto/scheduler: not in enabled drivers build config 00:16:50.537 crypto/uadk: not in enabled drivers build config 00:16:50.537 crypto/virtio: not in enabled drivers build config 00:16:50.537 compress/isal: not in enabled drivers build config 00:16:50.537 compress/mlx5: not in enabled drivers build config 00:16:50.537 compress/nitrox: not in enabled drivers build config 00:16:50.537 compress/octeontx: not in enabled drivers build config 00:16:50.537 compress/zlib: not in enabled drivers build config 00:16:50.537 regex/*: missing internal dependency, "regexdev" 00:16:50.537 ml/*: missing internal dependency, "mldev" 00:16:50.537 vdpa/ifc: not in enabled drivers build config 00:16:50.537 vdpa/mlx5: not in enabled drivers build config 00:16:50.537 vdpa/nfp: not in enabled drivers build config 00:16:50.537 vdpa/sfc: not in enabled drivers build config 00:16:50.537 event/*: missing internal dependency, "eventdev" 00:16:50.537 baseband/*: missing internal dependency, "bbdev" 00:16:50.537 gpu/*: missing internal dependency, "gpudev" 00:16:50.537 00:16:50.537 00:16:50.537 Build targets in project: 84 00:16:50.537 00:16:50.537 DPDK 24.03.0 00:16:50.537 00:16:50.537 User defined options 00:16:50.537 buildtype : debug 00:16:50.537 default_library : shared 00:16:50.537 libdir : lib 00:16:50.537 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:16:50.537 b_sanitize : address 00:16:50.537 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:16:50.537 c_link_args : 00:16:50.537 cpu_instruction_set: native 00:16:50.537 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:16:50.537 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:16:50.537 enable_docs : false 00:16:50.537 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:16:50.537 enable_kmods : false 00:16:50.537 max_lcores : 128 00:16:50.537 tests : false 00:16:50.537 00:16:50.537 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:16:50.794 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:16:50.795 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:16:50.795 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:16:50.795 [3/267] Linking static target lib/librte_kvargs.a 00:16:50.795 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:16:50.795 [5/267] Linking static target lib/librte_log.a 00:16:51.052 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:16:51.310 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:16:51.310 [8/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:16:51.310 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:16:51.310 
[10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:16:51.310 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:16:51.310 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:16:51.310 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:16:51.310 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:16:51.310 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:16:51.310 [16/267] Linking static target lib/librte_telemetry.a 00:16:51.567 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:16:51.567 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:16:51.824 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:16:51.824 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:16:51.824 [21/267] Linking target lib/librte_log.so.24.1 00:16:51.824 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:16:52.083 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:16:52.083 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:16:52.083 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:16:52.083 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:16:52.083 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:16:52.083 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:16:52.083 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:16:52.083 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:16:52.083 [31/267] Linking target lib/librte_kvargs.so.24.1 00:16:52.083 [32/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:16:52.083 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:16:52.341 [34/267] Linking target lib/librte_telemetry.so.24.1 00:16:52.341 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:16:52.341 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:16:52.341 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:16:52.341 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:16:52.341 [39/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:16:52.341 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:16:52.599 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:16:52.599 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:16:52.599 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:16:52.599 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:16:52.599 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:16:52.599 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:16:52.599 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:16:52.857 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
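[Editor's note] The numbered ninja progress lines here come from the DPDK sub-build that meson configured with the "User defined options" summarized earlier (buildtype : debug, default_library : shared, b_sanitize : address, max_lcores : 128, enable_docs : false, tests : false). A minimal sketch of reproducing that configure step by hand, assuming a checkout of the same DPDK sources -- the option values are copied from the log's summary, but this is illustrative and not the exact command SPDK's configure wrapper runs:

    # Sketch only: values taken from the "User defined options" block above.
    meson setup build-tmp \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Db_sanitize=address \
        -Dmax_lcores=128 \
        -Denable_docs=false \
        -Dtests=false
    # -j 10 matches the backend command ninja prints later in this log.
    ninja -C build-tmp -j 10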
00:16:52.857 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:16:52.857 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:16:52.857 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:16:52.857 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:16:53.115 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:16:53.115 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:16:53.115 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:16:53.115 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:16:53.115 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:16:53.115 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:16:53.115 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:16:53.373 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:16:53.373 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:16:53.373 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:16:53.373 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:16:53.373 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:16:53.373 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:16:53.373 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:16:53.631 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:16:53.631 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:16:53.631 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:16:53.631 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:16:53.631 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:16:53.888 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:16:53.888 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:16:53.888 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:16:53.888 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:16:53.888 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:16:53.888 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:16:53.889 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:16:53.889 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:16:54.147 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:16:54.147 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:16:54.147 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:16:54.147 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:16:54.147 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:16:54.147 [85/267] Linking static target lib/librte_eal.a 00:16:54.404 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:16:54.404 [87/267] Linking static target lib/librte_ring.a 00:16:54.404 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:16:54.404 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:16:54.404 [90/267] Compiling C 
object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:16:54.405 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:16:54.663 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:16:54.663 [93/267] Linking static target lib/librte_mempool.a 00:16:54.663 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:16:54.663 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:16:54.663 [96/267] Linking static target lib/librte_rcu.a 00:16:54.663 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:16:54.663 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:16:54.920 [99/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:16:54.920 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:16:54.920 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:16:54.920 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:16:54.920 [103/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:16:54.920 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:16:55.178 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:16:55.178 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:16:55.178 [107/267] Linking static target lib/librte_mbuf.a 00:16:55.178 [108/267] Linking static target lib/librte_net.a 00:16:55.178 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:16:55.178 [110/267] Linking static target lib/librte_meter.a 00:16:55.178 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:16:55.178 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:16:55.178 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:16:55.435 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:16:55.435 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:16:55.435 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:16:55.693 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:16:55.693 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:16:55.693 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:16:55.950 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:16:55.950 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:16:56.208 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:16:56.208 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:16:56.208 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:16:56.208 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:16:56.208 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:16:56.208 [127/267] Linking static target lib/librte_pci.a 00:16:56.208 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:16:56.208 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:16:56.208 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:16:56.208 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:16:56.466 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:16:56.466 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:16:56.466 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:16:56.466 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:16:56.466 [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:16:56.466 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:16:56.466 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:16:56.466 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:16:56.466 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:16:56.466 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:16:56.466 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:16:56.466 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:16:56.466 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:16:56.723 [145/267] Linking static target lib/librte_cmdline.a 00:16:56.723 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:16:56.723 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:16:56.980 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:16:56.980 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:16:56.980 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:16:56.980 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:16:56.980 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:16:56.980 [153/267] Linking static target lib/librte_timer.a 00:16:57.238 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:16:57.499 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:16:57.499 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:16:57.499 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:16:57.499 [158/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:16:57.499 [159/267] Linking static target lib/librte_compressdev.a 00:16:57.499 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:16:57.499 [161/267] Linking static target lib/librte_ethdev.a 00:16:57.499 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:16:57.760 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:16:57.760 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:16:57.760 [165/267] Linking static target lib/librte_hash.a 00:16:57.760 [166/267] Linking static target lib/librte_dmadev.a 00:16:57.760 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:16:57.760 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:16:58.020 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:16:58.020 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:16:58.020 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:16:58.020 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:16:58.020 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:16:58.281 [174/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:16:58.281 [175/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:58.281 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:16:58.281 [177/267] Linking static target lib/librte_cryptodev.a 00:16:58.281 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:16:58.281 [179/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:58.281 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:16:58.281 [181/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:16:58.540 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:16:58.540 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:16:58.540 [184/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:16:58.540 [185/267] Linking static target lib/librte_power.a 00:16:58.798 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:16:58.798 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:16:58.798 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:16:58.798 [189/267] Linking static target lib/librte_reorder.a 00:16:58.798 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:16:59.058 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:16:59.058 [192/267] Linking static target lib/librte_security.a 00:16:59.058 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:16:59.317 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:16:59.317 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:16:59.575 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:16:59.575 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:16:59.575 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:16:59.575 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:16:59.575 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:16:59.833 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:16:59.833 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:16:59.833 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:16:59.833 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:17:00.090 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:17:00.090 [206/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:17:00.090 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:17:00.090 [208/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:00.090 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:17:00.090 
[210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:17:00.090 [211/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:17:00.349 [212/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:17:00.349 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:17:00.349 [214/267] Linking static target drivers/librte_bus_vdev.a 00:17:00.349 [215/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:17:00.349 [216/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:17:00.349 [217/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:17:00.349 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:17:00.349 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:17:00.349 [220/267] Linking static target drivers/librte_bus_pci.a 00:17:00.607 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:00.607 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:17:00.607 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:17:00.607 [224/267] Linking static target drivers/librte_mempool_ring.a 00:17:00.607 [225/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:17:00.607 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:17:01.541 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:17:01.799 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:17:01.799 [229/267] Linking target lib/librte_eal.so.24.1 00:17:02.057 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:17:02.057 [231/267] Linking target lib/librte_meter.so.24.1 00:17:02.057 [232/267] Linking target lib/librte_timer.so.24.1 00:17:02.057 [233/267] Linking target lib/librte_ring.so.24.1 00:17:02.057 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:17:02.057 [235/267] Linking target lib/librte_pci.so.24.1 00:17:02.057 [236/267] Linking target lib/librte_dmadev.so.24.1 00:17:02.057 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:17:02.057 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:17:02.057 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:17:02.057 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:17:02.057 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:17:02.057 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:17:02.057 [243/267] Linking target lib/librte_rcu.so.24.1 00:17:02.057 [244/267] Linking target lib/librte_mempool.so.24.1 00:17:02.315 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:17:02.315 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:17:02.315 [247/267] Linking target lib/librte_mbuf.so.24.1 00:17:02.315 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:17:02.315 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:17:02.315 [250/267] 
Linking target lib/librte_reorder.so.24.1 00:17:02.315 [251/267] Linking target lib/librte_compressdev.so.24.1 00:17:02.315 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:17:02.315 [253/267] Linking target lib/librte_net.so.24.1 00:17:02.573 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:17:02.573 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:17:02.573 [256/267] Linking target lib/librte_cmdline.so.24.1 00:17:02.573 [257/267] Linking target lib/librte_security.so.24.1 00:17:02.573 [258/267] Linking target lib/librte_hash.so.24.1 00:17:02.573 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:17:03.137 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:03.137 [261/267] Linking target lib/librte_ethdev.so.24.1 00:17:03.137 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:17:03.138 [263/267] Linking target lib/librte_power.so.24.1 00:17:03.703 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:17:03.703 [265/267] Linking static target lib/librte_vhost.a 00:17:04.638 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:17:04.895 [267/267] Linking target lib/librte_vhost.so.24.1 00:17:04.895 INFO: autodetecting backend as ninja 00:17:04.895 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:17:19.762 CC lib/log/log_flags.o 00:17:19.762 CC lib/log/log.o 00:17:19.762 CC lib/log/log_deprecated.o 00:17:19.762 CC lib/ut_mock/mock.o 00:17:19.762 CC lib/ut/ut.o 00:17:19.762 LIB libspdk_ut_mock.a 00:17:19.762 LIB libspdk_log.a 00:17:19.762 LIB libspdk_ut.a 00:17:19.762 SO libspdk_ut_mock.so.6.0 00:17:19.762 SO libspdk_ut.so.2.0 00:17:19.762 SO libspdk_log.so.7.1 00:17:19.762 SYMLINK libspdk_ut_mock.so 00:17:19.762 SYMLINK libspdk_ut.so 00:17:19.762 SYMLINK libspdk_log.so 00:17:20.020 CXX lib/trace_parser/trace.o 00:17:20.020 CC lib/ioat/ioat.o 00:17:20.020 CC lib/util/base64.o 00:17:20.020 CC lib/util/cpuset.o 00:17:20.020 CC lib/util/crc16.o 00:17:20.020 CC lib/util/bit_array.o 00:17:20.020 CC lib/util/crc32.o 00:17:20.020 CC lib/util/crc32c.o 00:17:20.020 CC lib/dma/dma.o 00:17:20.020 CC lib/vfio_user/host/vfio_user_pci.o 00:17:20.020 CC lib/util/crc32_ieee.o 00:17:20.020 CC lib/util/crc64.o 00:17:20.020 CC lib/util/dif.o 00:17:20.305 CC lib/util/fd.o 00:17:20.305 LIB libspdk_dma.a 00:17:20.305 CC lib/util/fd_group.o 00:17:20.305 CC lib/util/file.o 00:17:20.305 SO libspdk_dma.so.5.0 00:17:20.305 CC lib/util/hexlify.o 00:17:20.305 CC lib/util/iov.o 00:17:20.305 SYMLINK libspdk_dma.so 00:17:20.305 CC lib/util/math.o 00:17:20.305 LIB libspdk_ioat.a 00:17:20.305 CC lib/util/net.o 00:17:20.305 SO libspdk_ioat.so.7.0 00:17:20.305 CC lib/util/pipe.o 00:17:20.305 CC lib/vfio_user/host/vfio_user.o 00:17:20.305 SYMLINK libspdk_ioat.so 00:17:20.305 CC lib/util/strerror_tls.o 00:17:20.305 CC lib/util/string.o 00:17:20.305 CC lib/util/uuid.o 00:17:20.305 CC lib/util/xor.o 00:17:20.580 CC lib/util/zipf.o 00:17:20.580 CC lib/util/md5.o 00:17:20.580 LIB libspdk_vfio_user.a 00:17:20.580 SO libspdk_vfio_user.so.5.0 00:17:20.580 SYMLINK libspdk_vfio_user.so 00:17:20.838 LIB libspdk_util.a 00:17:20.838 LIB libspdk_trace_parser.a 00:17:20.838 SO libspdk_trace_parser.so.6.0 00:17:20.838 SO libspdk_util.so.10.1 00:17:20.838 SYMLINK 
libspdk_trace_parser.so 00:17:20.838 SYMLINK libspdk_util.so 00:17:21.096 CC lib/rdma_utils/rdma_utils.o 00:17:21.096 CC lib/json/json_parse.o 00:17:21.096 CC lib/vmd/vmd.o 00:17:21.096 CC lib/json/json_util.o 00:17:21.096 CC lib/conf/conf.o 00:17:21.096 CC lib/json/json_write.o 00:17:21.096 CC lib/idxd/idxd.o 00:17:21.096 CC lib/vmd/led.o 00:17:21.096 CC lib/env_dpdk/env.o 00:17:21.096 CC lib/idxd/idxd_user.o 00:17:21.096 CC lib/env_dpdk/memory.o 00:17:21.355 LIB libspdk_conf.a 00:17:21.355 CC lib/idxd/idxd_kernel.o 00:17:21.355 CC lib/env_dpdk/pci.o 00:17:21.355 CC lib/env_dpdk/init.o 00:17:21.355 SO libspdk_conf.so.6.0 00:17:21.355 LIB libspdk_rdma_utils.a 00:17:21.355 LIB libspdk_json.a 00:17:21.355 SO libspdk_rdma_utils.so.1.0 00:17:21.355 SYMLINK libspdk_conf.so 00:17:21.355 SO libspdk_json.so.6.0 00:17:21.355 CC lib/env_dpdk/threads.o 00:17:21.355 SYMLINK libspdk_rdma_utils.so 00:17:21.355 SYMLINK libspdk_json.so 00:17:21.355 CC lib/env_dpdk/pci_ioat.o 00:17:21.612 CC lib/env_dpdk/pci_virtio.o 00:17:21.612 CC lib/rdma_provider/common.o 00:17:21.612 CC lib/env_dpdk/pci_vmd.o 00:17:21.612 CC lib/jsonrpc/jsonrpc_server.o 00:17:21.612 CC lib/env_dpdk/pci_idxd.o 00:17:21.612 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:17:21.612 CC lib/env_dpdk/pci_event.o 00:17:21.612 CC lib/rdma_provider/rdma_provider_verbs.o 00:17:21.612 LIB libspdk_idxd.a 00:17:21.612 CC lib/env_dpdk/sigbus_handler.o 00:17:21.612 SO libspdk_idxd.so.12.1 00:17:21.612 CC lib/jsonrpc/jsonrpc_client.o 00:17:21.612 CC lib/env_dpdk/pci_dpdk.o 00:17:21.869 LIB libspdk_vmd.a 00:17:21.869 SYMLINK libspdk_idxd.so 00:17:21.869 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:17:21.869 CC lib/env_dpdk/pci_dpdk_2207.o 00:17:21.869 SO libspdk_vmd.so.6.0 00:17:21.869 CC lib/env_dpdk/pci_dpdk_2211.o 00:17:21.869 LIB libspdk_rdma_provider.a 00:17:21.869 SO libspdk_rdma_provider.so.7.0 00:17:21.869 SYMLINK libspdk_vmd.so 00:17:21.869 SYMLINK libspdk_rdma_provider.so 00:17:21.869 LIB libspdk_jsonrpc.a 00:17:22.127 SO libspdk_jsonrpc.so.6.0 00:17:22.127 SYMLINK libspdk_jsonrpc.so 00:17:22.385 CC lib/rpc/rpc.o 00:17:22.385 LIB libspdk_rpc.a 00:17:22.385 LIB libspdk_env_dpdk.a 00:17:22.643 SO libspdk_rpc.so.6.0 00:17:22.643 SYMLINK libspdk_rpc.so 00:17:22.643 SO libspdk_env_dpdk.so.15.1 00:17:22.643 SYMLINK libspdk_env_dpdk.so 00:17:22.643 CC lib/keyring/keyring.o 00:17:22.643 CC lib/keyring/keyring_rpc.o 00:17:22.643 CC lib/notify/notify.o 00:17:22.643 CC lib/notify/notify_rpc.o 00:17:22.643 CC lib/trace/trace.o 00:17:22.643 CC lib/trace/trace_flags.o 00:17:22.643 CC lib/trace/trace_rpc.o 00:17:22.901 LIB libspdk_notify.a 00:17:22.901 SO libspdk_notify.so.6.0 00:17:22.901 SYMLINK libspdk_notify.so 00:17:22.901 LIB libspdk_trace.a 00:17:22.901 LIB libspdk_keyring.a 00:17:22.901 SO libspdk_trace.so.11.0 00:17:22.901 SO libspdk_keyring.so.2.0 00:17:22.901 SYMLINK libspdk_keyring.so 00:17:22.901 SYMLINK libspdk_trace.so 00:17:23.159 CC lib/sock/sock_rpc.o 00:17:23.159 CC lib/sock/sock.o 00:17:23.159 CC lib/thread/thread.o 00:17:23.159 CC lib/thread/iobuf.o 00:17:23.729 LIB libspdk_sock.a 00:17:23.729 SO libspdk_sock.so.10.0 00:17:23.729 SYMLINK libspdk_sock.so 00:17:23.988 CC lib/nvme/nvme_ns_cmd.o 00:17:23.988 CC lib/nvme/nvme_ctrlr_cmd.o 00:17:23.988 CC lib/nvme/nvme_pcie_common.o 00:17:23.988 CC lib/nvme/nvme_pcie.o 00:17:23.988 CC lib/nvme/nvme.o 00:17:23.988 CC lib/nvme/nvme_ns.o 00:17:23.988 CC lib/nvme/nvme_fabric.o 00:17:23.988 CC lib/nvme/nvme_ctrlr.o 00:17:23.988 CC lib/nvme/nvme_qpair.o 00:17:24.555 CC lib/nvme/nvme_quirks.o 00:17:24.555 CC 
lib/nvme/nvme_transport.o 00:17:24.555 CC lib/nvme/nvme_discovery.o 00:17:24.813 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:17:24.813 LIB libspdk_thread.a 00:17:24.813 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:17:24.813 CC lib/nvme/nvme_tcp.o 00:17:24.813 SO libspdk_thread.so.11.0 00:17:24.813 CC lib/nvme/nvme_opal.o 00:17:24.813 CC lib/nvme/nvme_io_msg.o 00:17:24.813 SYMLINK libspdk_thread.so 00:17:24.813 CC lib/nvme/nvme_poll_group.o 00:17:25.070 CC lib/nvme/nvme_zns.o 00:17:25.070 CC lib/nvme/nvme_stubs.o 00:17:25.070 CC lib/nvme/nvme_auth.o 00:17:25.070 CC lib/nvme/nvme_cuse.o 00:17:25.328 CC lib/nvme/nvme_rdma.o 00:17:25.328 CC lib/accel/accel.o 00:17:25.328 CC lib/blob/blobstore.o 00:17:25.586 CC lib/init/json_config.o 00:17:25.586 CC lib/init/subsystem.o 00:17:25.586 CC lib/virtio/virtio.o 00:17:25.844 CC lib/virtio/virtio_vhost_user.o 00:17:25.844 CC lib/init/subsystem_rpc.o 00:17:25.844 CC lib/init/rpc.o 00:17:26.102 CC lib/accel/accel_rpc.o 00:17:26.102 CC lib/virtio/virtio_vfio_user.o 00:17:26.102 CC lib/fsdev/fsdev.o 00:17:26.102 LIB libspdk_init.a 00:17:26.102 CC lib/fsdev/fsdev_io.o 00:17:26.102 SO libspdk_init.so.6.0 00:17:26.102 SYMLINK libspdk_init.so 00:17:26.102 CC lib/virtio/virtio_pci.o 00:17:26.102 CC lib/blob/request.o 00:17:26.359 CC lib/blob/zeroes.o 00:17:26.359 CC lib/accel/accel_sw.o 00:17:26.359 CC lib/fsdev/fsdev_rpc.o 00:17:26.359 CC lib/event/app.o 00:17:26.359 CC lib/blob/blob_bs_dev.o 00:17:26.359 LIB libspdk_nvme.a 00:17:26.359 CC lib/event/reactor.o 00:17:26.359 CC lib/event/log_rpc.o 00:17:26.359 LIB libspdk_virtio.a 00:17:26.617 CC lib/event/app_rpc.o 00:17:26.617 SO libspdk_virtio.so.7.0 00:17:26.617 LIB libspdk_accel.a 00:17:26.617 SO libspdk_accel.so.16.0 00:17:26.617 SYMLINK libspdk_virtio.so 00:17:26.617 SO libspdk_nvme.so.15.0 00:17:26.617 CC lib/event/scheduler_static.o 00:17:26.617 SYMLINK libspdk_accel.so 00:17:26.617 LIB libspdk_fsdev.a 00:17:26.876 SO libspdk_fsdev.so.2.0 00:17:26.876 CC lib/bdev/bdev.o 00:17:26.876 CC lib/bdev/bdev_rpc.o 00:17:26.876 CC lib/bdev/part.o 00:17:26.876 CC lib/bdev/scsi_nvme.o 00:17:26.876 CC lib/bdev/bdev_zone.o 00:17:26.876 SYMLINK libspdk_fsdev.so 00:17:26.876 SYMLINK libspdk_nvme.so 00:17:26.876 LIB libspdk_event.a 00:17:26.876 SO libspdk_event.so.14.0 00:17:26.876 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:17:26.876 SYMLINK libspdk_event.so 00:17:27.828 LIB libspdk_fuse_dispatcher.a 00:17:27.828 SO libspdk_fuse_dispatcher.so.1.0 00:17:27.828 SYMLINK libspdk_fuse_dispatcher.so 00:17:28.761 LIB libspdk_blob.a 00:17:28.761 SO libspdk_blob.so.12.0 00:17:28.761 SYMLINK libspdk_blob.so 00:17:29.017 CC lib/blobfs/blobfs.o 00:17:29.017 CC lib/blobfs/tree.o 00:17:29.017 CC lib/lvol/lvol.o 00:17:29.582 LIB libspdk_bdev.a 00:17:29.582 SO libspdk_bdev.so.17.0 00:17:29.582 SYMLINK libspdk_bdev.so 00:17:29.840 LIB libspdk_blobfs.a 00:17:29.840 CC lib/nvmf/ctrlr.o 00:17:29.840 CC lib/ublk/ublk_rpc.o 00:17:29.840 CC lib/ublk/ublk.o 00:17:29.840 CC lib/nvmf/ctrlr_discovery.o 00:17:29.840 CC lib/nvmf/ctrlr_bdev.o 00:17:29.840 CC lib/scsi/dev.o 00:17:29.840 SO libspdk_blobfs.so.11.0 00:17:29.840 CC lib/ftl/ftl_core.o 00:17:29.840 CC lib/nbd/nbd.o 00:17:29.840 SYMLINK libspdk_blobfs.so 00:17:29.840 CC lib/ftl/ftl_init.o 00:17:30.098 CC lib/ftl/ftl_layout.o 00:17:30.098 LIB libspdk_lvol.a 00:17:30.098 CC lib/scsi/lun.o 00:17:30.098 SO libspdk_lvol.so.11.0 00:17:30.098 CC lib/scsi/port.o 00:17:30.098 SYMLINK libspdk_lvol.so 00:17:30.098 CC lib/scsi/scsi.o 00:17:30.098 CC lib/scsi/scsi_bdev.o 00:17:30.098 CC lib/scsi/scsi_pr.o 00:17:30.440 
CC lib/ftl/ftl_debug.o 00:17:30.440 CC lib/nbd/nbd_rpc.o 00:17:30.440 CC lib/scsi/scsi_rpc.o 00:17:30.440 CC lib/scsi/task.o 00:17:30.440 CC lib/nvmf/subsystem.o 00:17:30.440 LIB libspdk_nbd.a 00:17:30.440 CC lib/nvmf/nvmf.o 00:17:30.440 SO libspdk_nbd.so.7.0 00:17:30.440 CC lib/ftl/ftl_io.o 00:17:30.440 SYMLINK libspdk_nbd.so 00:17:30.440 CC lib/ftl/ftl_sb.o 00:17:30.440 CC lib/ftl/ftl_l2p.o 00:17:30.440 CC lib/ftl/ftl_l2p_flat.o 00:17:30.440 LIB libspdk_ublk.a 00:17:30.698 LIB libspdk_scsi.a 00:17:30.698 SO libspdk_ublk.so.3.0 00:17:30.698 CC lib/nvmf/nvmf_rpc.o 00:17:30.699 SO libspdk_scsi.so.9.0 00:17:30.699 SYMLINK libspdk_ublk.so 00:17:30.699 CC lib/nvmf/transport.o 00:17:30.699 CC lib/nvmf/tcp.o 00:17:30.699 SYMLINK libspdk_scsi.so 00:17:30.699 CC lib/nvmf/stubs.o 00:17:30.699 CC lib/ftl/ftl_nv_cache.o 00:17:30.958 CC lib/iscsi/conn.o 00:17:30.958 CC lib/vhost/vhost.o 00:17:31.216 CC lib/nvmf/mdns_server.o 00:17:31.216 CC lib/iscsi/init_grp.o 00:17:31.475 CC lib/vhost/vhost_rpc.o 00:17:31.475 CC lib/iscsi/iscsi.o 00:17:31.475 CC lib/nvmf/rdma.o 00:17:31.475 CC lib/ftl/ftl_band.o 00:17:31.475 CC lib/ftl/ftl_band_ops.o 00:17:31.732 CC lib/nvmf/auth.o 00:17:31.732 CC lib/ftl/ftl_writer.o 00:17:31.732 CC lib/vhost/vhost_scsi.o 00:17:31.732 CC lib/ftl/ftl_rq.o 00:17:31.732 CC lib/vhost/vhost_blk.o 00:17:31.732 CC lib/iscsi/param.o 00:17:31.732 CC lib/iscsi/portal_grp.o 00:17:31.989 CC lib/ftl/ftl_reloc.o 00:17:31.989 CC lib/vhost/rte_vhost_user.o 00:17:31.989 CC lib/iscsi/tgt_node.o 00:17:31.989 CC lib/ftl/ftl_l2p_cache.o 00:17:32.246 CC lib/ftl/ftl_p2l.o 00:17:32.246 CC lib/iscsi/iscsi_subsystem.o 00:17:32.504 CC lib/ftl/ftl_p2l_log.o 00:17:32.504 CC lib/ftl/mngt/ftl_mngt.o 00:17:32.504 CC lib/iscsi/iscsi_rpc.o 00:17:32.504 CC lib/iscsi/task.o 00:17:32.504 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:17:32.762 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:17:32.762 CC lib/ftl/mngt/ftl_mngt_startup.o 00:17:32.762 CC lib/ftl/mngt/ftl_mngt_md.o 00:17:32.762 CC lib/ftl/mngt/ftl_mngt_misc.o 00:17:32.762 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:17:32.762 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:17:32.762 CC lib/ftl/mngt/ftl_mngt_band.o 00:17:32.762 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:17:32.763 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:17:33.021 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:17:33.021 LIB libspdk_iscsi.a 00:17:33.021 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:17:33.021 LIB libspdk_vhost.a 00:17:33.021 CC lib/ftl/utils/ftl_conf.o 00:17:33.021 CC lib/ftl/utils/ftl_md.o 00:17:33.021 CC lib/ftl/utils/ftl_mempool.o 00:17:33.021 CC lib/ftl/utils/ftl_bitmap.o 00:17:33.021 SO libspdk_iscsi.so.8.0 00:17:33.021 SO libspdk_vhost.so.8.0 00:17:33.021 CC lib/ftl/utils/ftl_property.o 00:17:33.021 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:17:33.021 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:17:33.021 SYMLINK libspdk_vhost.so 00:17:33.021 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:17:33.021 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:17:33.279 SYMLINK libspdk_iscsi.so 00:17:33.279 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:17:33.279 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:17:33.279 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:17:33.279 CC lib/ftl/upgrade/ftl_sb_v3.o 00:17:33.279 CC lib/ftl/upgrade/ftl_sb_v5.o 00:17:33.279 CC lib/ftl/nvc/ftl_nvc_dev.o 00:17:33.279 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:17:33.279 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:17:33.279 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:17:33.279 CC lib/ftl/base/ftl_base_dev.o 00:17:33.279 CC lib/ftl/base/ftl_base_bdev.o 00:17:33.279 CC lib/ftl/ftl_trace.o 00:17:33.536 LIB 
libspdk_ftl.a 00:17:33.536 LIB libspdk_nvmf.a 00:17:33.794 SO libspdk_nvmf.so.20.0 00:17:33.794 SO libspdk_ftl.so.9.0 00:17:34.051 SYMLINK libspdk_nvmf.so 00:17:34.051 SYMLINK libspdk_ftl.so 00:17:34.309 CC module/env_dpdk/env_dpdk_rpc.o 00:17:34.309 CC module/blob/bdev/blob_bdev.o 00:17:34.309 CC module/accel/error/accel_error.o 00:17:34.309 CC module/sock/posix/posix.o 00:17:34.309 CC module/fsdev/aio/fsdev_aio.o 00:17:34.309 CC module/keyring/file/keyring.o 00:17:34.309 CC module/scheduler/dynamic/scheduler_dynamic.o 00:17:34.309 CC module/accel/dsa/accel_dsa.o 00:17:34.309 CC module/accel/iaa/accel_iaa.o 00:17:34.309 CC module/accel/ioat/accel_ioat.o 00:17:34.309 LIB libspdk_env_dpdk_rpc.a 00:17:34.309 SO libspdk_env_dpdk_rpc.so.6.0 00:17:34.309 SYMLINK libspdk_env_dpdk_rpc.so 00:17:34.566 CC module/keyring/file/keyring_rpc.o 00:17:34.566 CC module/fsdev/aio/fsdev_aio_rpc.o 00:17:34.566 CC module/accel/ioat/accel_ioat_rpc.o 00:17:34.566 CC module/accel/error/accel_error_rpc.o 00:17:34.566 LIB libspdk_scheduler_dynamic.a 00:17:34.566 LIB libspdk_blob_bdev.a 00:17:34.566 LIB libspdk_keyring_file.a 00:17:34.566 CC module/accel/iaa/accel_iaa_rpc.o 00:17:34.566 CC module/accel/dsa/accel_dsa_rpc.o 00:17:34.566 SO libspdk_blob_bdev.so.12.0 00:17:34.566 SO libspdk_scheduler_dynamic.so.4.0 00:17:34.566 SO libspdk_keyring_file.so.2.0 00:17:34.566 LIB libspdk_accel_ioat.a 00:17:34.566 LIB libspdk_accel_error.a 00:17:34.566 SYMLINK libspdk_blob_bdev.so 00:17:34.566 SYMLINK libspdk_scheduler_dynamic.so 00:17:34.566 CC module/fsdev/aio/linux_aio_mgr.o 00:17:34.566 SO libspdk_accel_ioat.so.6.0 00:17:34.566 SO libspdk_accel_error.so.2.0 00:17:34.566 SYMLINK libspdk_keyring_file.so 00:17:34.566 SYMLINK libspdk_accel_error.so 00:17:34.566 SYMLINK libspdk_accel_ioat.so 00:17:34.566 LIB libspdk_accel_iaa.a 00:17:34.824 LIB libspdk_accel_dsa.a 00:17:34.824 SO libspdk_accel_iaa.so.3.0 00:17:34.824 SO libspdk_accel_dsa.so.5.0 00:17:34.824 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:17:34.824 CC module/keyring/linux/keyring.o 00:17:34.824 CC module/scheduler/gscheduler/gscheduler.o 00:17:34.824 SYMLINK libspdk_accel_iaa.so 00:17:34.824 CC module/keyring/linux/keyring_rpc.o 00:17:34.824 SYMLINK libspdk_accel_dsa.so 00:17:34.824 LIB libspdk_scheduler_dpdk_governor.a 00:17:34.824 LIB libspdk_keyring_linux.a 00:17:34.824 CC module/blobfs/bdev/blobfs_bdev.o 00:17:34.824 CC module/bdev/delay/vbdev_delay.o 00:17:34.824 LIB libspdk_sock_posix.a 00:17:34.824 SO libspdk_scheduler_dpdk_governor.so.4.0 00:17:34.824 LIB libspdk_scheduler_gscheduler.a 00:17:34.824 SO libspdk_keyring_linux.so.1.0 00:17:35.081 CC module/bdev/error/vbdev_error.o 00:17:35.081 CC module/bdev/gpt/gpt.o 00:17:35.081 SO libspdk_scheduler_gscheduler.so.4.0 00:17:35.081 SO libspdk_sock_posix.so.6.0 00:17:35.081 SYMLINK libspdk_scheduler_dpdk_governor.so 00:17:35.081 SYMLINK libspdk_keyring_linux.so 00:17:35.081 CC module/bdev/gpt/vbdev_gpt.o 00:17:35.081 CC module/bdev/delay/vbdev_delay_rpc.o 00:17:35.081 SYMLINK libspdk_scheduler_gscheduler.so 00:17:35.081 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:17:35.081 CC module/bdev/lvol/vbdev_lvol.o 00:17:35.081 LIB libspdk_fsdev_aio.a 00:17:35.081 SYMLINK libspdk_sock_posix.so 00:17:35.081 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:17:35.081 SO libspdk_fsdev_aio.so.1.0 00:17:35.081 SYMLINK libspdk_fsdev_aio.so 00:17:35.081 LIB libspdk_blobfs_bdev.a 00:17:35.338 CC module/bdev/error/vbdev_error_rpc.o 00:17:35.338 SO libspdk_blobfs_bdev.so.6.0 00:17:35.338 CC module/bdev/malloc/bdev_malloc.o 
00:17:35.338 LIB libspdk_bdev_gpt.a 00:17:35.338 SO libspdk_bdev_gpt.so.6.0 00:17:35.338 SYMLINK libspdk_blobfs_bdev.so 00:17:35.338 CC module/bdev/malloc/bdev_malloc_rpc.o 00:17:35.338 CC module/bdev/null/bdev_null.o 00:17:35.338 CC module/bdev/nvme/bdev_nvme.o 00:17:35.338 LIB libspdk_bdev_delay.a 00:17:35.338 SYMLINK libspdk_bdev_gpt.so 00:17:35.338 CC module/bdev/passthru/vbdev_passthru.o 00:17:35.338 SO libspdk_bdev_delay.so.6.0 00:17:35.338 LIB libspdk_bdev_error.a 00:17:35.338 SYMLINK libspdk_bdev_delay.so 00:17:35.338 SO libspdk_bdev_error.so.6.0 00:17:35.338 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:17:35.338 CC module/bdev/raid/bdev_raid.o 00:17:35.338 CC module/bdev/raid/bdev_raid_rpc.o 00:17:35.338 SYMLINK libspdk_bdev_error.so 00:17:35.595 LIB libspdk_bdev_malloc.a 00:17:35.595 CC module/bdev/null/bdev_null_rpc.o 00:17:35.595 LIB libspdk_bdev_lvol.a 00:17:35.595 CC module/bdev/split/vbdev_split.o 00:17:35.595 CC module/bdev/raid/bdev_raid_sb.o 00:17:35.595 SO libspdk_bdev_malloc.so.6.0 00:17:35.595 SO libspdk_bdev_lvol.so.6.0 00:17:35.595 CC module/bdev/zone_block/vbdev_zone_block.o 00:17:35.595 LIB libspdk_bdev_passthru.a 00:17:35.595 SO libspdk_bdev_passthru.so.6.0 00:17:35.595 SYMLINK libspdk_bdev_malloc.so 00:17:35.595 CC module/bdev/nvme/bdev_nvme_rpc.o 00:17:35.595 SYMLINK libspdk_bdev_lvol.so 00:17:35.595 CC module/bdev/nvme/nvme_rpc.o 00:17:35.595 CC module/bdev/nvme/bdev_mdns_client.o 00:17:35.595 LIB libspdk_bdev_null.a 00:17:35.595 SYMLINK libspdk_bdev_passthru.so 00:17:35.595 CC module/bdev/nvme/vbdev_opal.o 00:17:35.595 SO libspdk_bdev_null.so.6.0 00:17:35.852 SYMLINK libspdk_bdev_null.so 00:17:35.852 CC module/bdev/split/vbdev_split_rpc.o 00:17:35.852 CC module/bdev/xnvme/bdev_xnvme.o 00:17:35.852 LIB libspdk_bdev_split.a 00:17:35.852 CC module/bdev/aio/bdev_aio.o 00:17:35.852 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:17:35.852 SO libspdk_bdev_split.so.6.0 00:17:35.852 CC module/bdev/ftl/bdev_ftl.o 00:17:36.110 CC module/bdev/iscsi/bdev_iscsi.o 00:17:36.110 SYMLINK libspdk_bdev_split.so 00:17:36.110 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:17:36.110 CC module/bdev/virtio/bdev_virtio_scsi.o 00:17:36.110 LIB libspdk_bdev_zone_block.a 00:17:36.110 SO libspdk_bdev_zone_block.so.6.0 00:17:36.110 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:17:36.110 CC module/bdev/virtio/bdev_virtio_blk.o 00:17:36.110 SYMLINK libspdk_bdev_zone_block.so 00:17:36.110 CC module/bdev/virtio/bdev_virtio_rpc.o 00:17:36.110 CC module/bdev/ftl/bdev_ftl_rpc.o 00:17:36.368 CC module/bdev/aio/bdev_aio_rpc.o 00:17:36.368 CC module/bdev/nvme/vbdev_opal_rpc.o 00:17:36.368 LIB libspdk_bdev_xnvme.a 00:17:36.368 SO libspdk_bdev_xnvme.so.3.0 00:17:36.368 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:17:36.368 LIB libspdk_bdev_iscsi.a 00:17:36.368 SYMLINK libspdk_bdev_xnvme.so 00:17:36.368 CC module/bdev/raid/raid0.o 00:17:36.368 SO libspdk_bdev_iscsi.so.6.0 00:17:36.369 LIB libspdk_bdev_aio.a 00:17:36.369 CC module/bdev/raid/raid1.o 00:17:36.369 LIB libspdk_bdev_ftl.a 00:17:36.369 SYMLINK libspdk_bdev_iscsi.so 00:17:36.369 SO libspdk_bdev_aio.so.6.0 00:17:36.369 CC module/bdev/raid/concat.o 00:17:36.369 SO libspdk_bdev_ftl.so.6.0 00:17:36.369 LIB libspdk_bdev_virtio.a 00:17:36.369 SO libspdk_bdev_virtio.so.6.0 00:17:36.369 SYMLINK libspdk_bdev_aio.so 00:17:36.626 SYMLINK libspdk_bdev_ftl.so 00:17:36.626 SYMLINK libspdk_bdev_virtio.so 00:17:36.626 LIB libspdk_bdev_raid.a 00:17:36.882 SO libspdk_bdev_raid.so.6.0 00:17:36.882 SYMLINK libspdk_bdev_raid.so 00:17:37.448 LIB libspdk_bdev_nvme.a 
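[Editor's note] Throughout the SPDK portion of the build, each library emits a LIB line (static archive), an SO line (versioned shared object), and a SYMLINK line (unversioned development link). A rough sketch of what that triple corresponds to, using the librte-style GNU toolchain conventions and the log's own libspdk_log objects as the example -- the actual rules live in SPDK's makefiles, so the commands below are illustrative only:

    # Illustrative only -- not SPDK's real Makefile recipe.
    ar rcs libspdk_log.a log.o log_flags.o log_deprecated.o          # "LIB libspdk_log.a"
    cc -shared -Wl,-soname,libspdk_log.so.7.1 \
        -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o    # "SO libspdk_log.so.7.1"
    ln -sf libspdk_log.so.7.1 libspdk_log.so                         # "SYMLINK libspdk_log.so"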
00:17:37.448 SO libspdk_bdev_nvme.so.7.1 00:17:37.705 SYMLINK libspdk_bdev_nvme.so 00:17:37.963 CC module/event/subsystems/iobuf/iobuf.o 00:17:37.963 CC module/event/subsystems/sock/sock.o 00:17:37.963 CC module/event/subsystems/scheduler/scheduler.o 00:17:37.963 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:17:37.963 CC module/event/subsystems/fsdev/fsdev.o 00:17:37.963 CC module/event/subsystems/vmd/vmd.o 00:17:37.963 CC module/event/subsystems/vmd/vmd_rpc.o 00:17:37.963 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:17:37.963 CC module/event/subsystems/keyring/keyring.o 00:17:38.221 LIB libspdk_event_scheduler.a 00:17:38.221 LIB libspdk_event_fsdev.a 00:17:38.221 LIB libspdk_event_keyring.a 00:17:38.221 LIB libspdk_event_vhost_blk.a 00:17:38.221 LIB libspdk_event_iobuf.a 00:17:38.221 LIB libspdk_event_sock.a 00:17:38.221 SO libspdk_event_scheduler.so.4.0 00:17:38.221 LIB libspdk_event_vmd.a 00:17:38.221 SO libspdk_event_fsdev.so.1.0 00:17:38.221 SO libspdk_event_vhost_blk.so.3.0 00:17:38.221 SO libspdk_event_keyring.so.1.0 00:17:38.221 SO libspdk_event_sock.so.5.0 00:17:38.221 SO libspdk_event_iobuf.so.3.0 00:17:38.221 SO libspdk_event_vmd.so.6.0 00:17:38.221 SYMLINK libspdk_event_scheduler.so 00:17:38.221 SYMLINK libspdk_event_fsdev.so 00:17:38.221 SYMLINK libspdk_event_vhost_blk.so 00:17:38.221 SYMLINK libspdk_event_keyring.so 00:17:38.221 SYMLINK libspdk_event_sock.so 00:17:38.221 SYMLINK libspdk_event_iobuf.so 00:17:38.221 SYMLINK libspdk_event_vmd.so 00:17:38.481 CC module/event/subsystems/accel/accel.o 00:17:38.738 LIB libspdk_event_accel.a 00:17:38.738 SO libspdk_event_accel.so.6.0 00:17:38.739 SYMLINK libspdk_event_accel.so 00:17:38.996 CC module/event/subsystems/bdev/bdev.o 00:17:38.996 LIB libspdk_event_bdev.a 00:17:39.254 SO libspdk_event_bdev.so.6.0 00:17:39.254 SYMLINK libspdk_event_bdev.so 00:17:39.254 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:17:39.254 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:17:39.254 CC module/event/subsystems/nbd/nbd.o 00:17:39.254 CC module/event/subsystems/scsi/scsi.o 00:17:39.254 CC module/event/subsystems/ublk/ublk.o 00:17:39.511 LIB libspdk_event_ublk.a 00:17:39.511 LIB libspdk_event_nbd.a 00:17:39.511 LIB libspdk_event_scsi.a 00:17:39.511 SO libspdk_event_ublk.so.3.0 00:17:39.511 SO libspdk_event_nbd.so.6.0 00:17:39.511 SO libspdk_event_scsi.so.6.0 00:17:39.511 SYMLINK libspdk_event_ublk.so 00:17:39.511 SYMLINK libspdk_event_nbd.so 00:17:39.511 SYMLINK libspdk_event_scsi.so 00:17:39.511 LIB libspdk_event_nvmf.a 00:17:39.511 SO libspdk_event_nvmf.so.6.0 00:17:39.511 SYMLINK libspdk_event_nvmf.so 00:17:39.769 CC module/event/subsystems/iscsi/iscsi.o 00:17:39.769 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:17:39.769 LIB libspdk_event_vhost_scsi.a 00:17:39.769 LIB libspdk_event_iscsi.a 00:17:40.027 SO libspdk_event_vhost_scsi.so.3.0 00:17:40.027 SO libspdk_event_iscsi.so.6.0 00:17:40.027 SYMLINK libspdk_event_iscsi.so 00:17:40.027 SYMLINK libspdk_event_vhost_scsi.so 00:17:40.027 SO libspdk.so.6.0 00:17:40.027 SYMLINK libspdk.so 00:17:40.284 TEST_HEADER include/spdk/accel.h 00:17:40.284 TEST_HEADER include/spdk/accel_module.h 00:17:40.284 TEST_HEADER include/spdk/assert.h 00:17:40.284 TEST_HEADER include/spdk/barrier.h 00:17:40.284 TEST_HEADER include/spdk/base64.h 00:17:40.284 CC test/rpc_client/rpc_client_test.o 00:17:40.284 TEST_HEADER include/spdk/bdev.h 00:17:40.284 TEST_HEADER include/spdk/bdev_module.h 00:17:40.284 CXX app/trace/trace.o 00:17:40.284 TEST_HEADER include/spdk/bdev_zone.h 00:17:40.284 TEST_HEADER 
include/spdk/bit_array.h 00:17:40.284 CC examples/interrupt_tgt/interrupt_tgt.o 00:17:40.284 TEST_HEADER include/spdk/bit_pool.h 00:17:40.284 TEST_HEADER include/spdk/blob_bdev.h 00:17:40.284 TEST_HEADER include/spdk/blobfs_bdev.h 00:17:40.284 TEST_HEADER include/spdk/blobfs.h 00:17:40.284 TEST_HEADER include/spdk/blob.h 00:17:40.284 TEST_HEADER include/spdk/conf.h 00:17:40.284 TEST_HEADER include/spdk/config.h 00:17:40.284 TEST_HEADER include/spdk/cpuset.h 00:17:40.284 TEST_HEADER include/spdk/crc16.h 00:17:40.284 TEST_HEADER include/spdk/crc32.h 00:17:40.284 TEST_HEADER include/spdk/crc64.h 00:17:40.284 TEST_HEADER include/spdk/dif.h 00:17:40.284 TEST_HEADER include/spdk/dma.h 00:17:40.284 TEST_HEADER include/spdk/endian.h 00:17:40.284 TEST_HEADER include/spdk/env_dpdk.h 00:17:40.284 TEST_HEADER include/spdk/env.h 00:17:40.284 TEST_HEADER include/spdk/event.h 00:17:40.284 CC test/thread/poller_perf/poller_perf.o 00:17:40.284 TEST_HEADER include/spdk/fd_group.h 00:17:40.284 TEST_HEADER include/spdk/fd.h 00:17:40.284 TEST_HEADER include/spdk/file.h 00:17:40.284 CC examples/util/zipf/zipf.o 00:17:40.284 TEST_HEADER include/spdk/fsdev.h 00:17:40.284 TEST_HEADER include/spdk/fsdev_module.h 00:17:40.284 TEST_HEADER include/spdk/ftl.h 00:17:40.284 CC examples/ioat/perf/perf.o 00:17:40.284 TEST_HEADER include/spdk/gpt_spec.h 00:17:40.284 TEST_HEADER include/spdk/hexlify.h 00:17:40.284 TEST_HEADER include/spdk/histogram_data.h 00:17:40.284 TEST_HEADER include/spdk/idxd.h 00:17:40.284 TEST_HEADER include/spdk/idxd_spec.h 00:17:40.284 TEST_HEADER include/spdk/init.h 00:17:40.284 TEST_HEADER include/spdk/ioat.h 00:17:40.284 TEST_HEADER include/spdk/ioat_spec.h 00:17:40.284 TEST_HEADER include/spdk/iscsi_spec.h 00:17:40.284 TEST_HEADER include/spdk/json.h 00:17:40.284 TEST_HEADER include/spdk/jsonrpc.h 00:17:40.284 TEST_HEADER include/spdk/keyring.h 00:17:40.284 TEST_HEADER include/spdk/keyring_module.h 00:17:40.284 TEST_HEADER include/spdk/likely.h 00:17:40.284 TEST_HEADER include/spdk/log.h 00:17:40.284 TEST_HEADER include/spdk/lvol.h 00:17:40.284 TEST_HEADER include/spdk/md5.h 00:17:40.284 TEST_HEADER include/spdk/memory.h 00:17:40.284 TEST_HEADER include/spdk/mmio.h 00:17:40.284 TEST_HEADER include/spdk/nbd.h 00:17:40.284 TEST_HEADER include/spdk/net.h 00:17:40.284 TEST_HEADER include/spdk/notify.h 00:17:40.284 TEST_HEADER include/spdk/nvme.h 00:17:40.284 TEST_HEADER include/spdk/nvme_intel.h 00:17:40.284 TEST_HEADER include/spdk/nvme_ocssd.h 00:17:40.284 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:17:40.284 CC test/dma/test_dma/test_dma.o 00:17:40.284 TEST_HEADER include/spdk/nvme_spec.h 00:17:40.284 TEST_HEADER include/spdk/nvme_zns.h 00:17:40.284 TEST_HEADER include/spdk/nvmf_cmd.h 00:17:40.284 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:17:40.284 TEST_HEADER include/spdk/nvmf.h 00:17:40.284 TEST_HEADER include/spdk/nvmf_spec.h 00:17:40.284 TEST_HEADER include/spdk/nvmf_transport.h 00:17:40.284 TEST_HEADER include/spdk/opal.h 00:17:40.284 TEST_HEADER include/spdk/opal_spec.h 00:17:40.284 CC test/app/bdev_svc/bdev_svc.o 00:17:40.284 TEST_HEADER include/spdk/pci_ids.h 00:17:40.284 TEST_HEADER include/spdk/pipe.h 00:17:40.284 TEST_HEADER include/spdk/queue.h 00:17:40.284 TEST_HEADER include/spdk/reduce.h 00:17:40.284 TEST_HEADER include/spdk/rpc.h 00:17:40.284 TEST_HEADER include/spdk/scheduler.h 00:17:40.284 TEST_HEADER include/spdk/scsi.h 00:17:40.542 TEST_HEADER include/spdk/scsi_spec.h 00:17:40.542 TEST_HEADER include/spdk/sock.h 00:17:40.542 TEST_HEADER include/spdk/stdinc.h 00:17:40.542 
TEST_HEADER include/spdk/string.h 00:17:40.542 TEST_HEADER include/spdk/thread.h 00:17:40.542 TEST_HEADER include/spdk/trace.h 00:17:40.542 TEST_HEADER include/spdk/trace_parser.h 00:17:40.542 TEST_HEADER include/spdk/tree.h 00:17:40.542 TEST_HEADER include/spdk/ublk.h 00:17:40.542 TEST_HEADER include/spdk/util.h 00:17:40.542 TEST_HEADER include/spdk/uuid.h 00:17:40.542 TEST_HEADER include/spdk/version.h 00:17:40.542 TEST_HEADER include/spdk/vfio_user_pci.h 00:17:40.542 TEST_HEADER include/spdk/vfio_user_spec.h 00:17:40.542 TEST_HEADER include/spdk/vhost.h 00:17:40.542 TEST_HEADER include/spdk/vmd.h 00:17:40.542 CC test/env/mem_callbacks/mem_callbacks.o 00:17:40.542 TEST_HEADER include/spdk/xor.h 00:17:40.542 TEST_HEADER include/spdk/zipf.h 00:17:40.542 CXX test/cpp_headers/accel.o 00:17:40.542 LINK rpc_client_test 00:17:40.542 LINK poller_perf 00:17:40.542 LINK zipf 00:17:40.542 LINK interrupt_tgt 00:17:40.542 LINK bdev_svc 00:17:40.542 LINK ioat_perf 00:17:40.542 CXX test/cpp_headers/accel_module.o 00:17:40.542 LINK spdk_trace 00:17:40.799 CC test/env/vtophys/vtophys.o 00:17:40.799 CC test/app/histogram_perf/histogram_perf.o 00:17:40.799 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:17:40.799 CXX test/cpp_headers/assert.o 00:17:40.799 CXX test/cpp_headers/barrier.o 00:17:40.799 CC examples/ioat/verify/verify.o 00:17:40.799 CC examples/thread/thread/thread_ex.o 00:17:40.799 LINK histogram_perf 00:17:40.799 LINK vtophys 00:17:40.799 CC app/trace_record/trace_record.o 00:17:40.799 LINK test_dma 00:17:40.799 CXX test/cpp_headers/base64.o 00:17:41.057 LINK mem_callbacks 00:17:41.058 LINK thread 00:17:41.058 CC app/nvmf_tgt/nvmf_main.o 00:17:41.058 LINK verify 00:17:41.058 CXX test/cpp_headers/bdev.o 00:17:41.058 CC app/iscsi_tgt/iscsi_tgt.o 00:17:41.058 CC test/event/event_perf/event_perf.o 00:17:41.058 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:17:41.058 LINK spdk_trace_record 00:17:41.058 CC test/event/reactor/reactor.o 00:17:41.058 LINK nvmf_tgt 00:17:41.058 LINK nvme_fuzz 00:17:41.058 CXX test/cpp_headers/bdev_module.o 00:17:41.315 CXX test/cpp_headers/bdev_zone.o 00:17:41.315 LINK event_perf 00:17:41.315 CC app/spdk_tgt/spdk_tgt.o 00:17:41.315 LINK iscsi_tgt 00:17:41.315 LINK env_dpdk_post_init 00:17:41.315 LINK reactor 00:17:41.315 CC examples/sock/hello_world/hello_sock.o 00:17:41.315 CXX test/cpp_headers/bit_array.o 00:17:41.315 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:17:41.315 CXX test/cpp_headers/bit_pool.o 00:17:41.315 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:17:41.315 CC test/env/memory/memory_ut.o 00:17:41.315 CC test/event/reactor_perf/reactor_perf.o 00:17:41.315 CC app/spdk_lspci/spdk_lspci.o 00:17:41.315 LINK spdk_tgt 00:17:41.573 CC app/spdk_nvme_perf/perf.o 00:17:41.573 CC app/spdk_nvme_identify/identify.o 00:17:41.573 LINK hello_sock 00:17:41.573 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:17:41.573 LINK spdk_lspci 00:17:41.573 CXX test/cpp_headers/blob_bdev.o 00:17:41.573 LINK reactor_perf 00:17:41.573 CXX test/cpp_headers/blobfs_bdev.o 00:17:41.830 CC test/event/app_repeat/app_repeat.o 00:17:41.830 CC app/spdk_nvme_discover/discovery_aer.o 00:17:41.830 CC test/env/pci/pci_ut.o 00:17:41.830 CC examples/vmd/lsvmd/lsvmd.o 00:17:41.830 LINK app_repeat 00:17:41.830 CXX test/cpp_headers/blobfs.o 00:17:41.830 LINK lsvmd 00:17:41.830 LINK spdk_nvme_discover 00:17:42.088 LINK vhost_fuzz 00:17:42.088 CXX test/cpp_headers/blob.o 00:17:42.088 CXX test/cpp_headers/conf.o 00:17:42.088 CC test/event/scheduler/scheduler.o 00:17:42.088 CC examples/vmd/led/led.o 
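For reference: the TEST_HEADER and CXX test/cpp_headers/*.o entries above compile every public SPDK header in its own C++ translation unit, so a header that is not self-contained (or not C++-safe) fails the build immediately. A minimal shell sketch of that idea, using illustrative paths rather than the build system's real rules:

    # Compile each public header alone in a throwaway C++ TU (sketch only).
    for hdr in include/spdk/*.h; do
        tu=$(mktemp --suffix=.cpp)
        printf '#include "spdk/%s"\n' "$(basename "$hdr")" > "$tu"   # one header per TU
        g++ -Iinclude -c "$tu" -o /dev/null || echo "not self-contained: $hdr"
        rm -f "$tu"
    done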
00:17:42.088 CC app/spdk_top/spdk_top.o 00:17:42.088 LINK spdk_nvme_identify 00:17:42.088 CXX test/cpp_headers/config.o 00:17:42.345 CXX test/cpp_headers/cpuset.o 00:17:42.345 LINK pci_ut 00:17:42.345 LINK led 00:17:42.345 LINK spdk_nvme_perf 00:17:42.345 CC test/accel/dif/dif.o 00:17:42.345 CXX test/cpp_headers/crc16.o 00:17:42.345 LINK scheduler 00:17:42.345 CXX test/cpp_headers/crc32.o 00:17:42.345 CXX test/cpp_headers/crc64.o 00:17:42.345 CXX test/cpp_headers/dif.o 00:17:42.603 CC app/vhost/vhost.o 00:17:42.603 CC app/spdk_dd/spdk_dd.o 00:17:42.603 CC examples/idxd/perf/perf.o 00:17:42.603 LINK memory_ut 00:17:42.603 CXX test/cpp_headers/dma.o 00:17:42.603 LINK vhost 00:17:42.603 CXX test/cpp_headers/endian.o 00:17:42.861 CC test/blobfs/mkfs/mkfs.o 00:17:42.861 CC test/lvol/esnap/esnap.o 00:17:42.861 CXX test/cpp_headers/env_dpdk.o 00:17:42.861 LINK spdk_dd 00:17:42.861 CC test/app/jsoncat/jsoncat.o 00:17:42.861 LINK idxd_perf 00:17:42.861 CC test/nvme/aer/aer.o 00:17:42.861 CXX test/cpp_headers/env.o 00:17:42.861 LINK mkfs 00:17:42.861 LINK iscsi_fuzz 00:17:43.175 LINK dif 00:17:43.175 LINK jsoncat 00:17:43.175 LINK spdk_top 00:17:43.175 CXX test/cpp_headers/event.o 00:17:43.175 CXX test/cpp_headers/fd_group.o 00:17:43.175 CC app/fio/nvme/fio_plugin.o 00:17:43.175 CXX test/cpp_headers/fd.o 00:17:43.175 CC examples/fsdev/hello_world/hello_fsdev.o 00:17:43.175 LINK aer 00:17:43.175 CC test/app/stub/stub.o 00:17:43.175 CC app/fio/bdev/fio_plugin.o 00:17:43.443 CC test/nvme/reset/reset.o 00:17:43.443 CXX test/cpp_headers/file.o 00:17:43.443 CC test/nvme/sgl/sgl.o 00:17:43.443 CC examples/accel/perf/accel_perf.o 00:17:43.443 CC test/nvme/e2edp/nvme_dp.o 00:17:43.443 LINK stub 00:17:43.443 CXX test/cpp_headers/fsdev.o 00:17:43.443 LINK hello_fsdev 00:17:43.443 LINK reset 00:17:43.701 CXX test/cpp_headers/fsdev_module.o 00:17:43.701 LINK spdk_bdev 00:17:43.701 CXX test/cpp_headers/ftl.o 00:17:43.701 LINK sgl 00:17:43.701 LINK spdk_nvme 00:17:43.701 LINK nvme_dp 00:17:43.701 CXX test/cpp_headers/gpt_spec.o 00:17:43.701 CC test/bdev/bdevio/bdevio.o 00:17:43.701 CC test/nvme/overhead/overhead.o 00:17:43.701 CXX test/cpp_headers/hexlify.o 00:17:43.701 CC test/nvme/err_injection/err_injection.o 00:17:43.701 CC test/nvme/startup/startup.o 00:17:43.701 LINK accel_perf 00:17:43.959 CC test/nvme/reserve/reserve.o 00:17:43.959 CC test/nvme/simple_copy/simple_copy.o 00:17:43.959 CC test/nvme/connect_stress/connect_stress.o 00:17:43.959 LINK err_injection 00:17:43.959 CXX test/cpp_headers/histogram_data.o 00:17:43.959 LINK startup 00:17:43.959 LINK bdevio 00:17:43.959 LINK overhead 00:17:43.959 LINK connect_stress 00:17:43.959 LINK reserve 00:17:44.217 CXX test/cpp_headers/idxd.o 00:17:44.217 LINK simple_copy 00:17:44.217 CXX test/cpp_headers/idxd_spec.o 00:17:44.217 CXX test/cpp_headers/init.o 00:17:44.217 CXX test/cpp_headers/ioat.o 00:17:44.217 CC examples/blob/hello_world/hello_blob.o 00:17:44.217 CC examples/blob/cli/blobcli.o 00:17:44.217 CXX test/cpp_headers/ioat_spec.o 00:17:44.217 CXX test/cpp_headers/iscsi_spec.o 00:17:44.217 CXX test/cpp_headers/json.o 00:17:44.217 CXX test/cpp_headers/jsonrpc.o 00:17:44.217 CC test/nvme/boot_partition/boot_partition.o 00:17:44.217 CC test/nvme/compliance/nvme_compliance.o 00:17:44.217 CC test/nvme/fused_ordering/fused_ordering.o 00:17:44.217 CXX test/cpp_headers/keyring.o 00:17:44.217 CXX test/cpp_headers/keyring_module.o 00:17:44.475 LINK hello_blob 00:17:44.475 CXX test/cpp_headers/likely.o 00:17:44.475 LINK boot_partition 00:17:44.475 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:17:44.475 CXX test/cpp_headers/log.o 00:17:44.475 LINK fused_ordering 00:17:44.475 CC test/nvme/fdp/fdp.o 00:17:44.475 CC test/nvme/cuse/cuse.o 00:17:44.733 LINK nvme_compliance 00:17:44.733 LINK doorbell_aers 00:17:44.733 LINK blobcli 00:17:44.733 CC examples/nvme/reconnect/reconnect.o 00:17:44.733 CC examples/nvme/hello_world/hello_world.o 00:17:44.733 CXX test/cpp_headers/lvol.o 00:17:44.733 CXX test/cpp_headers/md5.o 00:17:44.733 CXX test/cpp_headers/memory.o 00:17:44.733 CC examples/nvme/nvme_manage/nvme_manage.o 00:17:44.733 CXX test/cpp_headers/mmio.o 00:17:44.733 CXX test/cpp_headers/nbd.o 00:17:44.733 CXX test/cpp_headers/net.o 00:17:44.991 LINK fdp 00:17:44.991 CXX test/cpp_headers/notify.o 00:17:44.991 LINK hello_world 00:17:44.991 CXX test/cpp_headers/nvme.o 00:17:44.991 CXX test/cpp_headers/nvme_intel.o 00:17:44.991 CXX test/cpp_headers/nvme_ocssd.o 00:17:44.991 LINK reconnect 00:17:44.991 CXX test/cpp_headers/nvme_ocssd_spec.o 00:17:44.991 CXX test/cpp_headers/nvme_spec.o 00:17:44.991 CXX test/cpp_headers/nvme_zns.o 00:17:44.991 CC examples/nvme/arbitration/arbitration.o 00:17:45.248 CXX test/cpp_headers/nvmf_cmd.o 00:17:45.248 CXX test/cpp_headers/nvmf_fc_spec.o 00:17:45.248 CC examples/bdev/hello_world/hello_bdev.o 00:17:45.248 CC examples/nvme/hotplug/hotplug.o 00:17:45.248 CXX test/cpp_headers/nvmf.o 00:17:45.248 LINK nvme_manage 00:17:45.248 CC examples/nvme/cmb_copy/cmb_copy.o 00:17:45.248 CXX test/cpp_headers/nvmf_spec.o 00:17:45.248 LINK arbitration 00:17:45.248 CXX test/cpp_headers/nvmf_transport.o 00:17:45.506 CXX test/cpp_headers/opal.o 00:17:45.506 LINK hello_bdev 00:17:45.506 LINK cmb_copy 00:17:45.506 LINK hotplug 00:17:45.506 CXX test/cpp_headers/opal_spec.o 00:17:45.506 CC examples/nvme/abort/abort.o 00:17:45.506 CXX test/cpp_headers/pci_ids.o 00:17:45.506 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:17:45.506 CXX test/cpp_headers/pipe.o 00:17:45.506 CXX test/cpp_headers/queue.o 00:17:45.506 CXX test/cpp_headers/reduce.o 00:17:45.506 CXX test/cpp_headers/rpc.o 00:17:45.506 CXX test/cpp_headers/scheduler.o 00:17:45.764 CXX test/cpp_headers/scsi.o 00:17:45.764 LINK pmr_persistence 00:17:45.764 CXX test/cpp_headers/scsi_spec.o 00:17:45.764 CC examples/bdev/bdevperf/bdevperf.o 00:17:45.764 CXX test/cpp_headers/sock.o 00:17:45.764 CXX test/cpp_headers/stdinc.o 00:17:45.764 CXX test/cpp_headers/string.o 00:17:45.764 CXX test/cpp_headers/thread.o 00:17:45.764 CXX test/cpp_headers/trace.o 00:17:45.764 LINK cuse 00:17:45.764 CXX test/cpp_headers/trace_parser.o 00:17:45.764 LINK abort 00:17:45.764 CXX test/cpp_headers/tree.o 00:17:45.764 CXX test/cpp_headers/ublk.o 00:17:46.022 CXX test/cpp_headers/util.o 00:17:46.022 CXX test/cpp_headers/uuid.o 00:17:46.022 CXX test/cpp_headers/version.o 00:17:46.022 CXX test/cpp_headers/vfio_user_pci.o 00:17:46.022 CXX test/cpp_headers/vfio_user_spec.o 00:17:46.022 CXX test/cpp_headers/vhost.o 00:17:46.022 CXX test/cpp_headers/vmd.o 00:17:46.022 CXX test/cpp_headers/xor.o 00:17:46.022 CXX test/cpp_headers/zipf.o 00:17:46.587 LINK bdevperf 00:17:47.154 CC examples/nvmf/nvmf/nvmf.o 00:17:47.413 LINK nvmf 00:17:47.671 LINK esnap 00:17:48.238 00:17:48.238 real 1m8.591s 00:17:48.238 user 6m25.242s 00:17:48.238 sys 1m8.700s 00:17:48.238 22:59:26 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:17:48.238 22:59:26 make -- common/autotest_common.sh@10 -- $ set +x 00:17:48.238 ************************************ 00:17:48.238 END TEST make 00:17:48.238 
************************************ 00:17:48.238 22:59:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:17:48.238 22:59:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:48.238 22:59:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:48.238 22:59:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:48.238 22:59:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:17:48.238 22:59:26 -- pm/common@44 -- $ pid=5067 00:17:48.238 22:59:26 -- pm/common@50 -- $ kill -TERM 5067 00:17:48.238 22:59:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:48.238 22:59:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:17:48.238 22:59:26 -- pm/common@44 -- $ pid=5068 00:17:48.238 22:59:26 -- pm/common@50 -- $ kill -TERM 5068 00:17:48.238 22:59:26 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:17:48.238 22:59:26 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:48.238 22:59:26 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:48.238 22:59:26 -- common/autotest_common.sh@1711 -- # lcov --version 00:17:48.238 22:59:26 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:48.238 22:59:26 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:48.238 22:59:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.238 22:59:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.238 22:59:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.238 22:59:26 -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.238 22:59:26 -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.238 22:59:26 -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.238 22:59:26 -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.238 22:59:26 -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.238 22:59:26 -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.238 22:59:26 -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.238 22:59:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.238 22:59:26 -- scripts/common.sh@344 -- # case "$op" in 00:17:48.238 22:59:26 -- scripts/common.sh@345 -- # : 1 00:17:48.238 22:59:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.238 22:59:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.238 22:59:26 -- scripts/common.sh@365 -- # decimal 1 00:17:48.238 22:59:26 -- scripts/common.sh@353 -- # local d=1 00:17:48.238 22:59:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.238 22:59:26 -- scripts/common.sh@355 -- # echo 1 00:17:48.238 22:59:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.238 22:59:26 -- scripts/common.sh@366 -- # decimal 2 00:17:48.238 22:59:26 -- scripts/common.sh@353 -- # local d=2 00:17:48.238 22:59:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.238 22:59:26 -- scripts/common.sh@355 -- # echo 2 00:17:48.238 22:59:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.238 22:59:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.238 22:59:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.238 22:59:26 -- scripts/common.sh@368 -- # return 0 00:17:48.238 22:59:26 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.238 22:59:26 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.238 --rc genhtml_branch_coverage=1 00:17:48.238 --rc genhtml_function_coverage=1 00:17:48.238 --rc genhtml_legend=1 00:17:48.238 --rc geninfo_all_blocks=1 00:17:48.238 --rc geninfo_unexecuted_blocks=1 00:17:48.238 00:17:48.238 ' 00:17:48.238 22:59:26 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.238 --rc genhtml_branch_coverage=1 00:17:48.238 --rc genhtml_function_coverage=1 00:17:48.238 --rc genhtml_legend=1 00:17:48.238 --rc geninfo_all_blocks=1 00:17:48.238 --rc geninfo_unexecuted_blocks=1 00:17:48.238 00:17:48.238 ' 00:17:48.238 22:59:26 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.238 --rc genhtml_branch_coverage=1 00:17:48.238 --rc genhtml_function_coverage=1 00:17:48.238 --rc genhtml_legend=1 00:17:48.238 --rc geninfo_all_blocks=1 00:17:48.238 --rc geninfo_unexecuted_blocks=1 00:17:48.238 00:17:48.238 ' 00:17:48.238 22:59:26 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.238 --rc genhtml_branch_coverage=1 00:17:48.238 --rc genhtml_function_coverage=1 00:17:48.238 --rc genhtml_legend=1 00:17:48.238 --rc geninfo_all_blocks=1 00:17:48.238 --rc geninfo_unexecuted_blocks=1 00:17:48.238 00:17:48.238 ' 00:17:48.238 22:59:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.238 22:59:26 -- nvmf/common.sh@7 -- # uname -s 00:17:48.238 22:59:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.238 22:59:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.238 22:59:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.238 22:59:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.238 22:59:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.238 22:59:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.238 22:59:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.238 22:59:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.238 22:59:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.239 22:59:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.239 22:59:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:aa33a105-beb8-4410-b0bb-bb954c91bba9 00:17:48.239 
22:59:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=aa33a105-beb8-4410-b0bb-bb954c91bba9 00:17:48.239 22:59:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.239 22:59:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.239 22:59:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:48.239 22:59:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.239 22:59:26 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.239 22:59:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.239 22:59:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.239 22:59:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.239 22:59:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.239 22:59:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.239 22:59:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.239 22:59:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.239 22:59:26 -- paths/export.sh@5 -- # export PATH 00:17:48.239 22:59:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.239 22:59:26 -- nvmf/common.sh@51 -- # : 0 00:17:48.239 22:59:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.239 22:59:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.239 22:59:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.239 22:59:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.239 22:59:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.239 22:59:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.239 22:59:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.239 22:59:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.239 22:59:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.239 22:59:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:17:48.239 22:59:26 -- spdk/autotest.sh@32 -- # uname -s 00:17:48.239 22:59:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:17:48.239 22:59:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:17:48.239 22:59:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:17:48.239 22:59:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:17:48.239 22:59:26 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:17:48.239 22:59:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:17:48.497 22:59:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:17:48.497 22:59:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:17:48.497 22:59:26 -- spdk/autotest.sh@48 -- # udevadm_pid=54265 00:17:48.497 22:59:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:17:48.497 22:59:26 -- pm/common@17 -- # local monitor 00:17:48.497 22:59:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:48.497 22:59:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:48.497 22:59:26 -- pm/common@25 -- # sleep 1 00:17:48.497 22:59:26 -- pm/common@21 -- # date +%s 00:17:48.497 22:59:26 -- pm/common@21 -- # date +%s 00:17:48.497 22:59:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:17:48.497 22:59:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733785166 00:17:48.497 22:59:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733785166 00:17:48.497 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733785166_collect-cpu-load.pm.log 00:17:48.497 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733785166_collect-vmstat.pm.log 00:17:49.429 22:59:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:17:49.429 22:59:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:17:49.429 22:59:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.429 22:59:27 -- common/autotest_common.sh@10 -- # set +x 00:17:49.429 22:59:27 -- spdk/autotest.sh@59 -- # create_test_list 00:17:49.429 22:59:27 -- common/autotest_common.sh@752 -- # xtrace_disable 00:17:49.429 22:59:27 -- common/autotest_common.sh@10 -- # set +x 00:17:49.429 22:59:27 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:17:49.429 22:59:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:17:49.429 22:59:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:17:49.429 22:59:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:17:49.429 22:59:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:17:49.429 22:59:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:17:49.429 22:59:27 -- common/autotest_common.sh@1457 -- # uname 00:17:49.429 22:59:27 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:17:49.429 22:59:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:17:49.429 22:59:27 -- common/autotest_common.sh@1477 -- # uname 00:17:49.429 22:59:27 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:17:49.429 22:59:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:17:49.429 22:59:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:17:49.429 lcov: LCOV version 1.15 00:17:49.429 22:59:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:18:04.336 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:18:04.336 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:18:19.296 22:59:55 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:18:19.296 22:59:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.296 22:59:55 -- common/autotest_common.sh@10 -- # set +x 00:18:19.296 22:59:55 -- spdk/autotest.sh@78 -- # rm -f 00:18:19.296 22:59:55 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:19.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.296 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:18:19.296 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:18:19.296 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:18:19.296 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:18:19.296 22:59:56 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:18:19.296 22:59:56 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:18:19.296 22:59:56 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:18:19.296 22:59:56 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:18:19.296 22:59:56 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:18:19.297 22:59:56 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:18:19.297 22:59:56 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:19.297 22:59:56 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:18:19.297 22:59:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:19.297 22:59:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:18:19.297 22:59:56 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:19.297 22:59:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:19.297 22:59:56 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:18:19.297 22:59:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:19.297 22:59:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:18:19.297 22:59:56 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:19.297 22:59:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:19.297 22:59:56 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:18:19.297 22:59:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:19.297 22:59:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:18:19.297 22:59:56 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:18:19.297 22:59:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:19.297 22:59:56 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:18:19.297 22:59:56 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:18:19.297 22:59:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:19.297 22:59:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:18:19.297 22:59:56 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:18:19.297 22:59:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:19.297 22:59:56 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:18:19.297 22:59:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:19.297 22:59:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:18:19.297 22:59:56 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:18:19.297 22:59:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:18:19.297 22:59:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:19.297 22:59:56 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:18:19.297 22:59:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:19.297 22:59:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:19.297 22:59:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:18:19.297 22:59:56 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:18:19.297 22:59:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:18:19.297 No valid GPT data, bailing 00:18:19.297 22:59:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:19.297 22:59:56 -- scripts/common.sh@394 -- # pt= 00:18:19.297 22:59:56 -- scripts/common.sh@395 -- # return 1 00:18:19.297 22:59:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:18:19.297 1+0 records in 00:18:19.297 1+0 records out 00:18:19.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100078 s, 105 MB/s 00:18:19.297 22:59:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:19.297 22:59:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:19.297 22:59:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:18:19.297 22:59:56 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:18:19.297 22:59:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:18:19.297 No valid GPT data, bailing 00:18:19.297 22:59:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:19.297 22:59:56 -- scripts/common.sh@394 -- # pt= 00:18:19.297 22:59:56 -- scripts/common.sh@395 -- # return 1 00:18:19.297 22:59:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:18:19.297 1+0 records in 00:18:19.297 1+0 records out 00:18:19.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00400262 s, 262 MB/s 00:18:19.297 22:59:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:19.297 22:59:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:19.297 22:59:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:18:19.297 22:59:56 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:18:19.297 22:59:56 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:18:19.297 No valid GPT data, bailing 00:18:19.297 22:59:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:18:19.297 22:59:56 -- scripts/common.sh@394 -- # pt= 00:18:19.297 22:59:56 -- scripts/common.sh@395 -- # return 1 00:18:19.297 22:59:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:18:19.297 1+0 records in 00:18:19.297 1+0 records out 00:18:19.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403569 s, 260 MB/s 00:18:19.297 22:59:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:19.297 22:59:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:19.297 22:59:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:18:19.297 22:59:56 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:18:19.297 22:59:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:18:19.297 No valid GPT data, bailing 00:18:19.297 22:59:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:18:19.297 22:59:56 -- scripts/common.sh@394 -- # pt= 00:18:19.297 22:59:56 -- scripts/common.sh@395 -- # return 1 00:18:19.297 22:59:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:18:19.297 1+0 records in 00:18:19.297 1+0 records out 00:18:19.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380624 s, 275 MB/s 00:18:19.297 22:59:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:19.297 22:59:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:19.297 22:59:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:18:19.297 22:59:56 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:18:19.297 22:59:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:18:19.297 No valid GPT data, bailing 00:18:19.297 22:59:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:18:19.297 22:59:57 -- scripts/common.sh@394 -- # pt= 00:18:19.297 22:59:57 -- scripts/common.sh@395 -- # return 1 00:18:19.297 22:59:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:18:19.297 1+0 records in 00:18:19.297 1+0 records out 00:18:19.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00357475 s, 293 MB/s 00:18:19.297 22:59:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:19.297 22:59:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:19.297 22:59:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:18:19.297 22:59:57 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:18:19.297 22:59:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:18:19.297 No valid GPT data, bailing 00:18:19.297 22:59:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:18:19.297 22:59:57 -- scripts/common.sh@394 -- # pt= 00:18:19.297 22:59:57 -- scripts/common.sh@395 -- # return 1 00:18:19.297 22:59:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:18:19.297 1+0 records in 00:18:19.297 1+0 records out 00:18:19.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441693 s, 237 MB/s 00:18:19.297 22:59:57 -- spdk/autotest.sh@105 -- # sync 00:18:19.297 22:59:57 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:18:19.297 22:59:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:18:19.297 22:59:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:18:20.672 
22:59:58 -- spdk/autotest.sh@111 -- # uname -s 00:18:20.672 22:59:58 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:18:20.672 22:59:58 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:18:20.672 22:59:58 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:18:20.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:21.239 Hugepages 00:18:21.239 node hugesize free / total 00:18:21.239 node0 1048576kB 0 / 0 00:18:21.239 node0 2048kB 0 / 0 00:18:21.239 00:18:21.239 Type BDF Vendor Device NUMA Driver Device Block devices 00:18:21.239 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:18:21.239 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:18:21.239 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:18:21.239 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:18:21.497 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:18:21.497 22:59:59 -- spdk/autotest.sh@117 -- # uname -s 00:18:21.497 22:59:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:18:21.497 22:59:59 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:18:21.497 22:59:59 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:21.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:22.319 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:22.319 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:22.319 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:22.319 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:22.319 23:00:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:18:23.689 23:00:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:18:23.689 23:00:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:18:23.689 23:00:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:18:23.689 23:00:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:18:23.689 23:00:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:18:23.689 23:00:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:18:23.689 23:00:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:23.689 23:00:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:23.689 23:00:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:18:23.689 23:00:01 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:18:23.689 23:00:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:18:23.689 23:00:01 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:23.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:23.946 Waiting for block devices as requested 00:18:23.946 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:23.946 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:23.946 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:24.201 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:29.462 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:29.462 23:00:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:18:29.462 23:00:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
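The get_nvme_ctrlr_from_bdf trace that follows maps a PCI address to its NVMe character device by resolving the /sys/class/nvme/* symlinks, then reads the OACS field from 'nvme id-ctrl' to decide whether the controller supports namespace management (OACS bit 0x8). A standalone sketch of the same lookup, assuming the sysfs layout seen in this VM:

    bdf=0000:00:10.0
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")   # sysfs node for this BDF
    ctrlr=/dev/$(basename "$path")                                      # resolves to /dev/nvme1 here
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)             # ' 0x12a' on these devices
    (( oacs & 0x8 )) && echo "$ctrlr supports namespace management"

With oacs = 0x12a, bit 0x8 is set, which is why the trace records oacs_ns_manage=8 and proceeds to the unvmcap check.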
00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:18:29.462 23:00:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:18:29.462 23:00:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:18:29.462 23:00:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:18:29.462 23:00:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:18:29.462 23:00:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:18:29.462 23:00:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:18:29.462 23:00:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1543 -- # continue 00:18:29.462 23:00:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:18:29.462 23:00:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:18:29.462 23:00:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:18:29.462 23:00:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:18:29.462 23:00:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:18:29.462 23:00:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:18:29.462 23:00:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:18:29.462 23:00:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:18:29.462 23:00:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1543 -- # continue 00:18:29.462 23:00:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:18:29.462 23:00:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:18:29.462 23:00:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:18:29.462 23:00:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:18:29.462 23:00:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:18:29.462 23:00:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1543 -- # continue 00:18:29.462 23:00:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:18:29.462 23:00:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:18:29.462 23:00:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:18:29.462 23:00:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:18:29.462 23:00:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:18:29.463 23:00:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:18:29.463 23:00:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:18:29.463 23:00:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:18:29.463 23:00:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:18:29.463 23:00:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:18:29.463 23:00:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:18:29.463 23:00:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:18:29.463 23:00:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:18:29.463 23:00:07 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:18:29.463 23:00:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:18:29.463 23:00:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:18:29.463 23:00:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:18:29.463 23:00:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:18:29.463 23:00:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:18:29.463 23:00:07 -- common/autotest_common.sh@1543 -- # continue 00:18:29.463 23:00:07 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:18:29.463 23:00:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.463 23:00:07 -- common/autotest_common.sh@10 -- # set +x 00:18:29.463 23:00:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:18:29.463 23:00:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.463 23:00:07 -- common/autotest_common.sh@10 -- # set +x 00:18:29.463 23:00:07 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:29.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:30.287 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:30.287 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:30.287 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:30.287 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:30.287 23:00:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:18:30.287 23:00:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:30.287 23:00:08 -- common/autotest_common.sh@10 -- # set +x 00:18:30.287 23:00:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:18:30.287 23:00:08 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:18:30.287 23:00:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:18:30.287 23:00:08 -- common/autotest_common.sh@1563 -- # bdfs=() 00:18:30.287 23:00:08 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:18:30.287 23:00:08 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:18:30.287 23:00:08 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:18:30.287 23:00:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:18:30.287 23:00:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:18:30.287 23:00:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:18:30.287 23:00:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:30.287 23:00:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:18:30.287 23:00:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:30.287 23:00:08 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:18:30.287 23:00:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:18:30.287 23:00:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:18:30.287 23:00:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:18:30.287 23:00:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:18:30.287 23:00:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:18:30.287 23:00:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:18:30.287 23:00:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:18:30.287 23:00:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:18:30.287 
23:00:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:18:30.287 23:00:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:18:30.287 23:00:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:18:30.544 23:00:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:18:30.544 23:00:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:18:30.544 23:00:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:18:30.544 23:00:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:18:30.544 23:00:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:18:30.544 23:00:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:18:30.544 23:00:08 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:18:30.544 23:00:08 -- common/autotest_common.sh@1572 -- # return 0 00:18:30.544 23:00:08 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:18:30.544 23:00:08 -- common/autotest_common.sh@1580 -- # return 0 00:18:30.544 23:00:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:18:30.544 23:00:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:18:30.544 23:00:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:18:30.544 23:00:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:18:30.544 23:00:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:18:30.544 23:00:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.544 23:00:08 -- common/autotest_common.sh@10 -- # set +x 00:18:30.544 23:00:08 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:18:30.544 23:00:08 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:18:30.544 23:00:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:30.544 23:00:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.544 23:00:08 -- common/autotest_common.sh@10 -- # set +x 00:18:30.544 ************************************ 00:18:30.544 START TEST env 00:18:30.544 ************************************ 00:18:30.544 23:00:08 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:18:30.544 * Looking for test storage... 
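The opal_revert_cleanup scan just above walks each controller's PCI device ID and keeps only those matching 0x0a54 (an Intel datacenter NVMe part); the emulated QEMU controllers here all report 0x0010, so the resulting list is empty and the cleanup returns without touching anything. A sketch of that filter, using the BDFs this VM enumerated:

    # Keep only OPAL-capable controllers by PCI device ID (sketch of the trace above).
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done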
00:18:30.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:18:30.544 23:00:08 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:30.544 23:00:08 env -- common/autotest_common.sh@1711 -- # lcov --version 00:18:30.544 23:00:08 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:30.544 23:00:08 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:30.544 23:00:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.545 23:00:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:30.545 23:00:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:30.545 23:00:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.545 23:00:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:18:30.545 23:00:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:18:30.545 23:00:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:18:30.545 23:00:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:18:30.545 23:00:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:18:30.545 23:00:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:18:30.545 23:00:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:30.545 23:00:08 env -- scripts/common.sh@344 -- # case "$op" in 00:18:30.545 23:00:08 env -- scripts/common.sh@345 -- # : 1 00:18:30.545 23:00:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:30.545 23:00:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:30.545 23:00:08 env -- scripts/common.sh@365 -- # decimal 1 00:18:30.545 23:00:08 env -- scripts/common.sh@353 -- # local d=1 00:18:30.545 23:00:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.545 23:00:08 env -- scripts/common.sh@355 -- # echo 1 00:18:30.545 23:00:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:18:30.545 23:00:08 env -- scripts/common.sh@366 -- # decimal 2 00:18:30.545 23:00:08 env -- scripts/common.sh@353 -- # local d=2 00:18:30.545 23:00:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.545 23:00:08 env -- scripts/common.sh@355 -- # echo 2 00:18:30.545 23:00:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:18:30.545 23:00:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:30.545 23:00:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:30.545 23:00:08 env -- scripts/common.sh@368 -- # return 0 00:18:30.545 23:00:08 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.545 23:00:08 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.545 --rc genhtml_branch_coverage=1 00:18:30.545 --rc genhtml_function_coverage=1 00:18:30.545 --rc genhtml_legend=1 00:18:30.545 --rc geninfo_all_blocks=1 00:18:30.545 --rc geninfo_unexecuted_blocks=1 00:18:30.545 00:18:30.545 ' 00:18:30.545 23:00:08 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.545 --rc genhtml_branch_coverage=1 00:18:30.545 --rc genhtml_function_coverage=1 00:18:30.545 --rc genhtml_legend=1 00:18:30.545 --rc geninfo_all_blocks=1 00:18:30.545 --rc geninfo_unexecuted_blocks=1 00:18:30.545 00:18:30.545 ' 00:18:30.545 23:00:08 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.545 --rc genhtml_branch_coverage=1 00:18:30.545 --rc genhtml_function_coverage=1 00:18:30.545 --rc 
genhtml_legend=1 00:18:30.545 --rc geninfo_all_blocks=1 00:18:30.545 --rc geninfo_unexecuted_blocks=1 00:18:30.545 00:18:30.545 ' 00:18:30.545 23:00:08 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.545 --rc genhtml_branch_coverage=1 00:18:30.545 --rc genhtml_function_coverage=1 00:18:30.545 --rc genhtml_legend=1 00:18:30.545 --rc geninfo_all_blocks=1 00:18:30.545 --rc geninfo_unexecuted_blocks=1 00:18:30.545 00:18:30.545 ' 00:18:30.545 23:00:08 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:18:30.545 23:00:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:30.545 23:00:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.545 23:00:08 env -- common/autotest_common.sh@10 -- # set +x 00:18:30.545 ************************************ 00:18:30.545 START TEST env_memory 00:18:30.545 ************************************ 00:18:30.545 23:00:08 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:18:30.545 00:18:30.545 00:18:30.545 CUnit - A unit testing framework for C - Version 2.1-3 00:18:30.545 http://cunit.sourceforge.net/ 00:18:30.545 00:18:30.545 00:18:30.545 Suite: memory 00:18:30.545 Test: alloc and free memory map ...[2024-12-09 23:00:08.972848] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:18:30.545 passed 00:18:30.805 Test: mem map translation ...[2024-12-09 23:00:09.006407] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:18:30.805 [2024-12-09 23:00:09.006732] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:18:30.805 [2024-12-09 23:00:09.006919] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:18:30.805 [2024-12-09 23:00:09.007063] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:18:30.805 passed 00:18:30.805 Test: mem map registration ...[2024-12-09 23:00:09.062780] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:18:30.805 [2024-12-09 23:00:09.062997] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:18:30.805 passed 00:18:30.805 Test: mem map adjacent registrations ...passed 00:18:30.805 00:18:30.805 Run Summary: Type Total Ran Passed Failed Inactive 00:18:30.805 suites 1 1 n/a 0 0 00:18:30.805 tests 4 4 4 0 0 00:18:30.805 asserts 152 152 152 0 n/a 00:18:30.805 00:18:30.805 Elapsed time = 0.194 seconds 00:18:30.805 00:18:30.805 real 0m0.233s 00:18:30.805 user 0m0.200s 00:18:30.805 sys 0m0.023s 00:18:30.805 ************************************ 00:18:30.805 END TEST env_memory 00:18:30.805 ************************************ 00:18:30.805 23:00:09 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.805 23:00:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:18:30.805 23:00:09 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:18:30.805 23:00:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:30.805 23:00:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.805 23:00:09 env -- common/autotest_common.sh@10 -- # set +x 00:18:30.805 ************************************ 00:18:30.805 START TEST env_vtophys 00:18:30.805 ************************************ 00:18:30.805 23:00:09 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:18:30.805 EAL: lib.eal log level changed from notice to debug 00:18:30.805 EAL: Detected lcore 0 as core 0 on socket 0 00:18:30.805 EAL: Detected lcore 1 as core 0 on socket 0 00:18:30.805 EAL: Detected lcore 2 as core 0 on socket 0 00:18:30.805 EAL: Detected lcore 3 as core 0 on socket 0 00:18:30.805 EAL: Detected lcore 4 as core 0 on socket 0 00:18:30.805 EAL: Detected lcore 5 as core 0 on socket 0 00:18:30.805 EAL: Detected lcore 6 as core 0 on socket 0 00:18:30.805 EAL: Detected lcore 7 as core 0 on socket 0 00:18:30.805 EAL: Detected lcore 8 as core 0 on socket 0 00:18:30.805 EAL: Detected lcore 9 as core 0 on socket 0 00:18:30.805 EAL: Maximum logical cores by configuration: 128 00:18:30.805 EAL: Detected CPU lcores: 10 00:18:30.805 EAL: Detected NUMA nodes: 1 00:18:30.805 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:18:30.805 EAL: Detected shared linkage of DPDK 00:18:30.805 EAL: No shared files mode enabled, IPC will be disabled 00:18:30.805 EAL: Selected IOVA mode 'PA' 00:18:30.805 EAL: Probing VFIO support... 00:18:30.805 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:18:30.805 EAL: VFIO modules not loaded, skipping VFIO support... 00:18:30.805 EAL: Ask a virtual area of 0x2e000 bytes 00:18:30.805 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:18:30.805 EAL: Setting up physically contiguous memory... 
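The EAL lines above show DPDK reserving virtual address space for its memseg lists before any hugepages are actually backed, and the memory_ut run earlier in this log exercised the spdk_mem_map API that tracks per-2MB translations across exactly that space. A minimal sketch of that API, assuming the declarations in spdk/env.h; the default translation value, addresses, and helper name are illustrative only, and passing NULL ops is assumed to mean no notify callbacks are registered:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    /* Illustrative sketch: create a map, install one translation for a
     * 2 MB-aligned region, and look it up again. The invalid-parameter
     * errors in the memory_ut output above come from deliberately
     * violating the same 2 MB alignment rules. */
    static void
    mem_map_sketch(void)
    {
            struct spdk_mem_map *map;
            uint64_t len, translation;

            map = spdk_mem_map_alloc(UINT64_MAX /* default translation */, NULL, NULL);
            if (map == NULL) {
                    return;
            }
            spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0x12340000);
            len = 0x200000;
            translation = spdk_mem_map_translate(map, 0x200000, &len);
            printf("0x200000 -> 0x%" PRIx64 "\n", translation);
            spdk_mem_map_free(&map);
    }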
00:18:30.805 EAL: Setting maximum number of open files to 524288 00:18:30.805 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:18:30.805 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:18:30.806 EAL: Ask a virtual area of 0x61000 bytes 00:18:30.806 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:18:30.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:30.806 EAL: Ask a virtual area of 0x400000000 bytes 00:18:30.806 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:18:30.806 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:18:30.806 EAL: Ask a virtual area of 0x61000 bytes 00:18:30.806 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:18:30.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:30.806 EAL: Ask a virtual area of 0x400000000 bytes 00:18:30.806 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:18:30.806 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:18:30.806 EAL: Ask a virtual area of 0x61000 bytes 00:18:30.806 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:18:30.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:30.806 EAL: Ask a virtual area of 0x400000000 bytes 00:18:30.806 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:18:30.806 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:18:30.806 EAL: Ask a virtual area of 0x61000 bytes 00:18:30.806 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:18:30.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:30.806 EAL: Ask a virtual area of 0x400000000 bytes 00:18:30.806 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:18:30.806 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:18:30.806 EAL: Hugepages will be freed exactly as allocated. 00:18:30.806 EAL: No shared files mode enabled, IPC is disabled 00:18:30.806 EAL: No shared files mode enabled, IPC is disabled 00:18:31.063 EAL: TSC frequency is ~2600000 KHz 00:18:31.063 EAL: Main lcore 0 is ready (tid=7f86692d2a40;cpuset=[0]) 00:18:31.063 EAL: Trying to obtain current memory policy. 00:18:31.063 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:31.063 EAL: Restoring previous memory policy: 0 00:18:31.063 EAL: request: mp_malloc_sync 00:18:31.063 EAL: No shared files mode enabled, IPC is disabled 00:18:31.063 EAL: Heap on socket 0 was expanded by 2MB 00:18:31.063 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:18:31.063 EAL: No PCI address specified using 'addr=' in: bus=pci 00:18:31.063 EAL: Mem event callback 'spdk:(nil)' registered 00:18:31.063 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:18:31.063 00:18:31.063 00:18:31.063 CUnit - A unit testing framework for C - Version 2.1-3 00:18:31.063 http://cunit.sourceforge.net/ 00:18:31.063 00:18:31.063 00:18:31.063 Suite: components_suite 00:18:31.320 Test: vtophys_malloc_test ...passed 00:18:31.320 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:18:31.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:31.320 EAL: Restoring previous memory policy: 4 00:18:31.320 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.320 EAL: request: mp_malloc_sync 00:18:31.320 EAL: No shared files mode enabled, IPC is disabled 00:18:31.320 EAL: Heap on socket 0 was expanded by 4MB 00:18:31.320 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.320 EAL: request: mp_malloc_sync 00:18:31.320 EAL: No shared files mode enabled, IPC is disabled 00:18:31.320 EAL: Heap on socket 0 was shrunk by 4MB 00:18:31.320 EAL: Trying to obtain current memory policy. 00:18:31.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:31.320 EAL: Restoring previous memory policy: 4 00:18:31.320 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.320 EAL: request: mp_malloc_sync 00:18:31.320 EAL: No shared files mode enabled, IPC is disabled 00:18:31.320 EAL: Heap on socket 0 was expanded by 6MB 00:18:31.320 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.320 EAL: request: mp_malloc_sync 00:18:31.320 EAL: No shared files mode enabled, IPC is disabled 00:18:31.320 EAL: Heap on socket 0 was shrunk by 6MB 00:18:31.320 EAL: Trying to obtain current memory policy. 00:18:31.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:31.320 EAL: Restoring previous memory policy: 4 00:18:31.320 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.320 EAL: request: mp_malloc_sync 00:18:31.320 EAL: No shared files mode enabled, IPC is disabled 00:18:31.320 EAL: Heap on socket 0 was expanded by 10MB 00:18:31.320 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.320 EAL: request: mp_malloc_sync 00:18:31.320 EAL: No shared files mode enabled, IPC is disabled 00:18:31.320 EAL: Heap on socket 0 was shrunk by 10MB 00:18:31.320 EAL: Trying to obtain current memory policy. 00:18:31.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:31.320 EAL: Restoring previous memory policy: 4 00:18:31.320 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.320 EAL: request: mp_malloc_sync 00:18:31.320 EAL: No shared files mode enabled, IPC is disabled 00:18:31.320 EAL: Heap on socket 0 was expanded by 18MB 00:18:31.577 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.577 EAL: request: mp_malloc_sync 00:18:31.577 EAL: No shared files mode enabled, IPC is disabled 00:18:31.577 EAL: Heap on socket 0 was shrunk by 18MB 00:18:31.577 EAL: Trying to obtain current memory policy. 00:18:31.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:31.577 EAL: Restoring previous memory policy: 4 00:18:31.577 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.577 EAL: request: mp_malloc_sync 00:18:31.577 EAL: No shared files mode enabled, IPC is disabled 00:18:31.577 EAL: Heap on socket 0 was expanded by 34MB 00:18:31.577 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.577 EAL: request: mp_malloc_sync 00:18:31.577 EAL: No shared files mode enabled, IPC is disabled 00:18:31.577 EAL: Heap on socket 0 was shrunk by 34MB 00:18:31.577 EAL: Trying to obtain current memory policy. 
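Each expand/shrink pair in this run corresponds to one allocation round in vtophys_spdk_malloc_test: spdk_malloc() grows the DPDK heap on demand, spdk_free() returns the hugepages, and both directions fire the 'spdk:(nil)' mem event callback logged above. A hedged sketch of that call pair, with size and alignment chosen only for illustration:

    #include "spdk/env.h"

    /* Allocate DMA-safe memory from the env heap and release it. The
     * "Heap on socket 0 was expanded/shrunk" messages above are the
     * side effects of this pair whenever the heap has to grow. */
    static void
    heap_sketch(void)
    {
            void *buf = spdk_malloc(4 * 1024 * 1024, 0x1000, NULL,
                                    SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
            if (buf != NULL) {
                    spdk_free(buf);
            }
    }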
00:18:31.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:31.577 EAL: Restoring previous memory policy: 4 00:18:31.577 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.577 EAL: request: mp_malloc_sync 00:18:31.577 EAL: No shared files mode enabled, IPC is disabled 00:18:31.577 EAL: Heap on socket 0 was expanded by 66MB 00:18:31.577 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.577 EAL: request: mp_malloc_sync 00:18:31.577 EAL: No shared files mode enabled, IPC is disabled 00:18:31.577 EAL: Heap on socket 0 was shrunk by 66MB 00:18:31.577 EAL: Trying to obtain current memory policy. 00:18:31.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:31.834 EAL: Restoring previous memory policy: 4 00:18:31.834 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.834 EAL: request: mp_malloc_sync 00:18:31.834 EAL: No shared files mode enabled, IPC is disabled 00:18:31.834 EAL: Heap on socket 0 was expanded by 130MB 00:18:31.834 EAL: Calling mem event callback 'spdk:(nil)' 00:18:31.834 EAL: request: mp_malloc_sync 00:18:31.834 EAL: No shared files mode enabled, IPC is disabled 00:18:31.834 EAL: Heap on socket 0 was shrunk by 130MB 00:18:32.089 EAL: Trying to obtain current memory policy. 00:18:32.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:32.089 EAL: Restoring previous memory policy: 4 00:18:32.089 EAL: Calling mem event callback 'spdk:(nil)' 00:18:32.089 EAL: request: mp_malloc_sync 00:18:32.089 EAL: No shared files mode enabled, IPC is disabled 00:18:32.089 EAL: Heap on socket 0 was expanded by 258MB 00:18:32.351 EAL: Calling mem event callback 'spdk:(nil)' 00:18:32.351 EAL: request: mp_malloc_sync 00:18:32.351 EAL: No shared files mode enabled, IPC is disabled 00:18:32.351 EAL: Heap on socket 0 was shrunk by 258MB 00:18:32.607 EAL: Trying to obtain current memory policy. 00:18:32.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:32.607 EAL: Restoring previous memory policy: 4 00:18:32.607 EAL: Calling mem event callback 'spdk:(nil)' 00:18:32.607 EAL: request: mp_malloc_sync 00:18:32.607 EAL: No shared files mode enabled, IPC is disabled 00:18:32.607 EAL: Heap on socket 0 was expanded by 514MB 00:18:33.172 EAL: Calling mem event callback 'spdk:(nil)' 00:18:33.431 EAL: request: mp_malloc_sync 00:18:33.431 EAL: No shared files mode enabled, IPC is disabled 00:18:33.431 EAL: Heap on socket 0 was shrunk by 514MB 00:18:34.004 EAL: Trying to obtain current memory policy. 
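Behind the pass/fail verdicts, what vtophys verifies is that every env-allocated buffer has a stable virtual-to-physical translation under the selected IOVA mode ('PA' in this run). A minimal lookup, assuming spdk_vtophys() and spdk_dma_zmalloc() from spdk/env.h; the buffer size and messages are illustrative:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    static void
    vtophys_sketch(void)
    {
            uint64_t len = 4096;
            void *buf = spdk_dma_zmalloc(len, 4096, NULL);
            uint64_t paddr;

            if (buf == NULL) {
                    return;
            }
            paddr = spdk_vtophys(buf, &len);
            if (paddr == SPDK_VTOPHYS_ERROR) {
                    fprintf(stderr, "no translation for %p\n", buf);
            } else {
                    /* len is updated to how far the mapping stays physically contiguous */
                    printf("%p -> 0x%" PRIx64 " for %" PRIu64 " bytes\n", buf, paddr, len);
            }
            spdk_dma_free(buf);
    }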
00:18:34.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:34.004 EAL: Restoring previous memory policy: 4 00:18:34.004 EAL: Calling mem event callback 'spdk:(nil)' 00:18:34.004 EAL: request: mp_malloc_sync 00:18:34.004 EAL: No shared files mode enabled, IPC is disabled 00:18:34.004 EAL: Heap on socket 0 was expanded by 1026MB 00:18:35.006 EAL: Calling mem event callback 'spdk:(nil)' 00:18:35.264 EAL: request: mp_malloc_sync 00:18:35.264 EAL: No shared files mode enabled, IPC is disabled 00:18:35.264 EAL: Heap on socket 0 was shrunk by 1026MB 00:18:36.200 passed 00:18:36.200 00:18:36.200 Run Summary: Type Total Ran Passed Failed Inactive 00:18:36.200 suites 1 1 n/a 0 0 00:18:36.200 tests 2 2 2 0 0 00:18:36.200 asserts 5880 5880 5880 0 n/a 00:18:36.200 00:18:36.200 Elapsed time = 4.953 seconds 00:18:36.200 EAL: Calling mem event callback 'spdk:(nil)' 00:18:36.200 EAL: request: mp_malloc_sync 00:18:36.200 EAL: No shared files mode enabled, IPC is disabled 00:18:36.200 EAL: Heap on socket 0 was shrunk by 2MB 00:18:36.200 EAL: No shared files mode enabled, IPC is disabled 00:18:36.200 EAL: No shared files mode enabled, IPC is disabled 00:18:36.200 EAL: No shared files mode enabled, IPC is disabled 00:18:36.200 00:18:36.200 real 0m5.221s 00:18:36.200 user 0m4.391s 00:18:36.200 sys 0m0.681s 00:18:36.200 23:00:14 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.200 23:00:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:18:36.200 ************************************ 00:18:36.200 END TEST env_vtophys 00:18:36.200 ************************************ 00:18:36.200 23:00:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:18:36.200 23:00:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:36.200 23:00:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.200 23:00:14 env -- common/autotest_common.sh@10 -- # set +x 00:18:36.200 ************************************ 00:18:36.200 START TEST env_pci 00:18:36.200 ************************************ 00:18:36.200 23:00:14 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:18:36.200 00:18:36.200 00:18:36.200 CUnit - A unit testing framework for C - Version 2.1-3 00:18:36.200 http://cunit.sourceforge.net/ 00:18:36.200 00:18:36.200 00:18:36.200 Suite: pci 00:18:36.200 Test: pci_hook ...[2024-12-09 23:00:14.490831] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57006 has claimed it 00:18:36.200 EAL: Cannot find device (10000:00:01.0) 00:18:36.200 passed 00:18:36.200 00:18:36.200 Run Summary: Type Total Ran Passed Failed Inactive 00:18:36.200 suites 1 1 n/a 0 0 00:18:36.200 tests 1 1 1 0 0 00:18:36.200 asserts 25 25 25 0 n/a 00:18:36.200 00:18:36.200 Elapsed time = 0.005 seconds 00:18:36.200 EAL: Failed to attach device on primary process 00:18:36.200 00:18:36.200 real 0m0.068s 00:18:36.200 user 0m0.034s 00:18:36.200 sys 0m0.032s 00:18:36.200 23:00:14 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.200 23:00:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:18:36.200 ************************************ 00:18:36.200 END TEST env_pci 00:18:36.200 ************************************ 00:18:36.200 23:00:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:18:36.200 23:00:14 env -- env/env.sh@15 -- # uname 00:18:36.200 23:00:14 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:18:36.200 23:00:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:18:36.200 23:00:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:18:36.200 23:00:14 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:36.200 23:00:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.200 23:00:14 env -- common/autotest_common.sh@10 -- # set +x 00:18:36.200 ************************************ 00:18:36.200 START TEST env_dpdk_post_init 00:18:36.200 ************************************ 00:18:36.200 23:00:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:18:36.200 EAL: Detected CPU lcores: 10 00:18:36.200 EAL: Detected NUMA nodes: 1 00:18:36.200 EAL: Detected shared linkage of DPDK 00:18:36.200 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:18:36.200 EAL: Selected IOVA mode 'PA' 00:18:36.457 TELEMETRY: No legacy callbacks, legacy socket not created 00:18:36.457 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:18:36.457 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:18:36.457 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:18:36.457 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:18:36.457 Starting DPDK initialization... 00:18:36.457 Starting SPDK post initialization... 00:18:36.457 SPDK NVMe probe 00:18:36.457 Attaching to 0000:00:10.0 00:18:36.457 Attaching to 0000:00:11.0 00:18:36.457 Attaching to 0000:00:12.0 00:18:36.457 Attaching to 0000:00:13.0 00:18:36.458 Attached to 0000:00:10.0 00:18:36.458 Attached to 0000:00:11.0 00:18:36.458 Attached to 0000:00:13.0 00:18:36.458 Attached to 0000:00:12.0 00:18:36.458 Cleaning up... 
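env_dpdk_post_init walks through the same startup sequence an SPDK application performs: initialize the env layer with the core mask and base virtual address seen on the test's command line above, then let the NVMe driver probe and attach the four emulated controllers. A condensed sketch; the option values are copied from the test invocation, the callbacks are trimmed to the minimum, and error handling is elided:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            return true; /* accept every controller the bus scan finds */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attached to %s\n", trid->traddr); /* e.g. 0000:00:10.0 */
    }

    int
    main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "env_dpdk_post_init";
            opts.core_mask = "0x1";
            opts.base_virtaddr = 0x200000000000;
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }
            return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }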
00:18:36.458 00:18:36.458 real 0m0.231s 00:18:36.458 user 0m0.076s 00:18:36.458 sys 0m0.057s 00:18:36.458 23:00:14 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.458 ************************************ 00:18:36.458 END TEST env_dpdk_post_init 00:18:36.458 ************************************ 00:18:36.458 23:00:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:18:36.458 23:00:14 env -- env/env.sh@26 -- # uname 00:18:36.458 23:00:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:18:36.458 23:00:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:18:36.458 23:00:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:36.458 23:00:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.458 23:00:14 env -- common/autotest_common.sh@10 -- # set +x 00:18:36.458 ************************************ 00:18:36.458 START TEST env_mem_callbacks 00:18:36.458 ************************************ 00:18:36.458 23:00:14 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:18:36.458 EAL: Detected CPU lcores: 10 00:18:36.458 EAL: Detected NUMA nodes: 1 00:18:36.458 EAL: Detected shared linkage of DPDK 00:18:36.458 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:18:36.458 EAL: Selected IOVA mode 'PA' 00:18:36.722 00:18:36.722 00:18:36.722 CUnit - A unit testing framework for C - Version 2.1-3 00:18:36.722 http://cunit.sourceforge.net/ 00:18:36.723 00:18:36.723 00:18:36.723 Suite: memory 00:18:36.723 Test: test ... 00:18:36.723 register 0x200000200000 2097152 00:18:36.723 malloc 3145728 00:18:36.723 TELEMETRY: No legacy callbacks, legacy socket not created 00:18:36.723 register 0x200000400000 4194304 00:18:36.723 buf 0x2000004fffc0 len 3145728 PASSED 00:18:36.723 malloc 64 00:18:36.723 buf 0x2000004ffec0 len 64 PASSED 00:18:36.723 malloc 4194304 00:18:36.723 register 0x200000800000 6291456 00:18:36.723 buf 0x2000009fffc0 len 4194304 PASSED 00:18:36.723 free 0x2000004fffc0 3145728 00:18:36.723 free 0x2000004ffec0 64 00:18:36.723 unregister 0x200000400000 4194304 PASSED 00:18:36.723 free 0x2000009fffc0 4194304 00:18:36.723 unregister 0x200000800000 6291456 PASSED 00:18:36.723 malloc 8388608 00:18:36.723 register 0x200000400000 10485760 00:18:36.723 buf 0x2000005fffc0 len 8388608 PASSED 00:18:36.723 free 0x2000005fffc0 8388608 00:18:36.723 unregister 0x200000400000 10485760 PASSED 00:18:36.723 passed 00:18:36.723 00:18:36.723 Run Summary: Type Total Ran Passed Failed Inactive 00:18:36.723 suites 1 1 n/a 0 0 00:18:36.723 tests 1 1 1 0 0 00:18:36.723 asserts 15 15 15 0 n/a 00:18:36.723 00:18:36.723 Elapsed time = 0.046 seconds 00:18:36.723 00:18:36.723 real 0m0.215s 00:18:36.723 user 0m0.062s 00:18:36.723 sys 0m0.051s 00:18:36.723 23:00:15 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.723 23:00:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:18:36.723 ************************************ 00:18:36.723 END TEST env_mem_callbacks 00:18:36.723 ************************************ 00:18:36.723 00:18:36.723 real 0m6.316s 00:18:36.723 user 0m4.916s 00:18:36.723 sys 0m1.043s 00:18:36.723 23:00:15 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.723 23:00:15 env -- common/autotest_common.sh@10 -- # set +x 00:18:36.723 ************************************ 00:18:36.723 END TEST env 00:18:36.723 
************************************ 00:18:36.723 23:00:15 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:18:36.723 23:00:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:36.723 23:00:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.723 23:00:15 -- common/autotest_common.sh@10 -- # set +x 00:18:36.723 ************************************ 00:18:36.723 START TEST rpc 00:18:36.723 ************************************ 00:18:36.723 23:00:15 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:18:36.723 * Looking for test storage... 00:18:36.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:36.982 23:00:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.982 23:00:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.982 23:00:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.982 23:00:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.982 23:00:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.982 23:00:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.982 23:00:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.982 23:00:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.982 23:00:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.982 23:00:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.982 23:00:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.982 23:00:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:36.982 23:00:15 rpc -- scripts/common.sh@345 -- # : 1 00:18:36.982 23:00:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.982 23:00:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:36.982 23:00:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:18:36.982 23:00:15 rpc -- scripts/common.sh@353 -- # local d=1 00:18:36.982 23:00:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.982 23:00:15 rpc -- scripts/common.sh@355 -- # echo 1 00:18:36.982 23:00:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.982 23:00:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:18:36.982 23:00:15 rpc -- scripts/common.sh@353 -- # local d=2 00:18:36.982 23:00:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.982 23:00:15 rpc -- scripts/common.sh@355 -- # echo 2 00:18:36.982 23:00:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:36.982 23:00:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.982 23:00:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.982 23:00:15 rpc -- scripts/common.sh@368 -- # return 0 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.982 --rc genhtml_branch_coverage=1 00:18:36.982 --rc genhtml_function_coverage=1 00:18:36.982 --rc genhtml_legend=1 00:18:36.982 --rc geninfo_all_blocks=1 00:18:36.982 --rc geninfo_unexecuted_blocks=1 00:18:36.982 00:18:36.982 ' 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.982 --rc genhtml_branch_coverage=1 00:18:36.982 --rc genhtml_function_coverage=1 00:18:36.982 --rc genhtml_legend=1 00:18:36.982 --rc geninfo_all_blocks=1 00:18:36.982 --rc geninfo_unexecuted_blocks=1 00:18:36.982 00:18:36.982 ' 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.982 --rc genhtml_branch_coverage=1 00:18:36.982 --rc genhtml_function_coverage=1 00:18:36.982 --rc genhtml_legend=1 00:18:36.982 --rc geninfo_all_blocks=1 00:18:36.982 --rc geninfo_unexecuted_blocks=1 00:18:36.982 00:18:36.982 ' 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.982 --rc genhtml_branch_coverage=1 00:18:36.982 --rc genhtml_function_coverage=1 00:18:36.982 --rc genhtml_legend=1 00:18:36.982 --rc geninfo_all_blocks=1 00:18:36.982 --rc geninfo_unexecuted_blocks=1 00:18:36.982 00:18:36.982 ' 00:18:36.982 23:00:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57133 00:18:36.982 23:00:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:36.982 23:00:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57133 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@835 -- # '[' -z 57133 ']' 00:18:36.982 23:00:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.982 23:00:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.982 [2024-12-09 23:00:15.339661] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:18:36.982 [2024-12-09 23:00:15.339935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57133 ] 00:18:37.238 [2024-12-09 23:00:15.499843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.238 [2024-12-09 23:00:15.600479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:18:37.238 [2024-12-09 23:00:15.600665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57133' to capture a snapshot of events at runtime. 00:18:37.238 [2024-12-09 23:00:15.600733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.238 [2024-12-09 23:00:15.600767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.238 [2024-12-09 23:00:15.600786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57133 for offline analysis/debug. 00:18:37.238 [2024-12-09 23:00:15.601702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.803 23:00:16 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.803 23:00:16 rpc -- common/autotest_common.sh@868 -- # return 0 00:18:37.803 23:00:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:18:37.803 23:00:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:18:37.803 23:00:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:18:37.803 23:00:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:18:37.803 23:00:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:37.803 23:00:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.803 23:00:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.803 ************************************ 00:18:37.803 START TEST rpc_integrity 00:18:37.803 ************************************ 00:18:37.803 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:18:37.803 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:37.803 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.803 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:37.803 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.803 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:18:37.803 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:18:37.803 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:18:37.803 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:18:37.803 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.803 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:37.803 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.803 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:18:37.803 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:18:37.803 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.803 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.061 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.061 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:18:38.061 { 00:18:38.061 "name": "Malloc0", 00:18:38.061 "aliases": [ 00:18:38.061 "4a7e3cc4-dda1-4773-b80e-5e210f18ef6d" 00:18:38.061 ], 
00:18:38.061 "product_name": "Malloc disk", 00:18:38.061 "block_size": 512, 00:18:38.061 "num_blocks": 16384, 00:18:38.061 "uuid": "4a7e3cc4-dda1-4773-b80e-5e210f18ef6d", 00:18:38.061 "assigned_rate_limits": { 00:18:38.061 "rw_ios_per_sec": 0, 00:18:38.061 "rw_mbytes_per_sec": 0, 00:18:38.061 "r_mbytes_per_sec": 0, 00:18:38.061 "w_mbytes_per_sec": 0 00:18:38.061 }, 00:18:38.061 "claimed": false, 00:18:38.061 "zoned": false, 00:18:38.061 "supported_io_types": { 00:18:38.061 "read": true, 00:18:38.061 "write": true, 00:18:38.061 "unmap": true, 00:18:38.061 "flush": true, 00:18:38.061 "reset": true, 00:18:38.061 "nvme_admin": false, 00:18:38.061 "nvme_io": false, 00:18:38.061 "nvme_io_md": false, 00:18:38.061 "write_zeroes": true, 00:18:38.061 "zcopy": true, 00:18:38.061 "get_zone_info": false, 00:18:38.061 "zone_management": false, 00:18:38.061 "zone_append": false, 00:18:38.061 "compare": false, 00:18:38.061 "compare_and_write": false, 00:18:38.061 "abort": true, 00:18:38.061 "seek_hole": false, 00:18:38.061 "seek_data": false, 00:18:38.061 "copy": true, 00:18:38.061 "nvme_iov_md": false 00:18:38.061 }, 00:18:38.061 "memory_domains": [ 00:18:38.061 { 00:18:38.061 "dma_device_id": "system", 00:18:38.061 "dma_device_type": 1 00:18:38.061 }, 00:18:38.061 { 00:18:38.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.061 "dma_device_type": 2 00:18:38.061 } 00:18:38.061 ], 00:18:38.061 "driver_specific": {} 00:18:38.061 } 00:18:38.061 ]' 00:18:38.061 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:18:38.061 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:18:38.061 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:18:38.061 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.061 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.061 [2024-12-09 23:00:16.312170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:18:38.061 [2024-12-09 23:00:16.312239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.061 [2024-12-09 23:00:16.312264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:38.061 [2024-12-09 23:00:16.312276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.061 [2024-12-09 23:00:16.314540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.061 [2024-12-09 23:00:16.314581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:18:38.061 Passthru0 00:18:38.061 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.061 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:18:38.061 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.061 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.062 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:18:38.062 { 00:18:38.062 "name": "Malloc0", 00:18:38.062 "aliases": [ 00:18:38.062 "4a7e3cc4-dda1-4773-b80e-5e210f18ef6d" 00:18:38.062 ], 00:18:38.062 "product_name": "Malloc disk", 00:18:38.062 "block_size": 512, 00:18:38.062 "num_blocks": 16384, 00:18:38.062 "uuid": "4a7e3cc4-dda1-4773-b80e-5e210f18ef6d", 00:18:38.062 "assigned_rate_limits": { 00:18:38.062 "rw_ios_per_sec": 0, 
00:18:38.062 "rw_mbytes_per_sec": 0, 00:18:38.062 "r_mbytes_per_sec": 0, 00:18:38.062 "w_mbytes_per_sec": 0 00:18:38.062 }, 00:18:38.062 "claimed": true, 00:18:38.062 "claim_type": "exclusive_write", 00:18:38.062 "zoned": false, 00:18:38.062 "supported_io_types": { 00:18:38.062 "read": true, 00:18:38.062 "write": true, 00:18:38.062 "unmap": true, 00:18:38.062 "flush": true, 00:18:38.062 "reset": true, 00:18:38.062 "nvme_admin": false, 00:18:38.062 "nvme_io": false, 00:18:38.062 "nvme_io_md": false, 00:18:38.062 "write_zeroes": true, 00:18:38.062 "zcopy": true, 00:18:38.062 "get_zone_info": false, 00:18:38.062 "zone_management": false, 00:18:38.062 "zone_append": false, 00:18:38.062 "compare": false, 00:18:38.062 "compare_and_write": false, 00:18:38.062 "abort": true, 00:18:38.062 "seek_hole": false, 00:18:38.062 "seek_data": false, 00:18:38.062 "copy": true, 00:18:38.062 "nvme_iov_md": false 00:18:38.062 }, 00:18:38.062 "memory_domains": [ 00:18:38.062 { 00:18:38.062 "dma_device_id": "system", 00:18:38.062 "dma_device_type": 1 00:18:38.062 }, 00:18:38.062 { 00:18:38.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.062 "dma_device_type": 2 00:18:38.062 } 00:18:38.062 ], 00:18:38.062 "driver_specific": {} 00:18:38.062 }, 00:18:38.062 { 00:18:38.062 "name": "Passthru0", 00:18:38.062 "aliases": [ 00:18:38.062 "dd18a754-7cb5-56c9-882d-45522007e4b4" 00:18:38.062 ], 00:18:38.062 "product_name": "passthru", 00:18:38.062 "block_size": 512, 00:18:38.062 "num_blocks": 16384, 00:18:38.062 "uuid": "dd18a754-7cb5-56c9-882d-45522007e4b4", 00:18:38.062 "assigned_rate_limits": { 00:18:38.062 "rw_ios_per_sec": 0, 00:18:38.062 "rw_mbytes_per_sec": 0, 00:18:38.062 "r_mbytes_per_sec": 0, 00:18:38.062 "w_mbytes_per_sec": 0 00:18:38.062 }, 00:18:38.062 "claimed": false, 00:18:38.062 "zoned": false, 00:18:38.062 "supported_io_types": { 00:18:38.062 "read": true, 00:18:38.062 "write": true, 00:18:38.062 "unmap": true, 00:18:38.062 "flush": true, 00:18:38.062 "reset": true, 00:18:38.062 "nvme_admin": false, 00:18:38.062 "nvme_io": false, 00:18:38.062 "nvme_io_md": false, 00:18:38.062 "write_zeroes": true, 00:18:38.062 "zcopy": true, 00:18:38.062 "get_zone_info": false, 00:18:38.062 "zone_management": false, 00:18:38.062 "zone_append": false, 00:18:38.062 "compare": false, 00:18:38.062 "compare_and_write": false, 00:18:38.062 "abort": true, 00:18:38.062 "seek_hole": false, 00:18:38.062 "seek_data": false, 00:18:38.062 "copy": true, 00:18:38.062 "nvme_iov_md": false 00:18:38.062 }, 00:18:38.062 "memory_domains": [ 00:18:38.062 { 00:18:38.062 "dma_device_id": "system", 00:18:38.062 "dma_device_type": 1 00:18:38.062 }, 00:18:38.062 { 00:18:38.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.062 "dma_device_type": 2 00:18:38.062 } 00:18:38.062 ], 00:18:38.062 "driver_specific": { 00:18:38.062 "passthru": { 00:18:38.062 "name": "Passthru0", 00:18:38.062 "base_bdev_name": "Malloc0" 00:18:38.062 } 00:18:38.062 } 00:18:38.062 } 00:18:38.062 ]' 00:18:38.062 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:18:38.062 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:18:38.062 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.062 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.062 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.062 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:18:38.062 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:18:38.062 ************************************ 00:18:38.062 END TEST rpc_integrity 00:18:38.062 ************************************ 00:18:38.062 23:00:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:18:38.062 00:18:38.062 real 0m0.252s 00:18:38.062 user 0m0.129s 00:18:38.062 sys 0m0.030s 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.062 23:00:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.062 23:00:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:18:38.062 23:00:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:38.062 23:00:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.062 23:00:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:38.062 ************************************ 00:18:38.062 START TEST rpc_plugins 00:18:38.062 ************************************ 00:18:38.062 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:18:38.062 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:18:38.062 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.062 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.320 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:18:38.320 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.320 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:18:38.320 { 00:18:38.320 "name": "Malloc1", 00:18:38.320 "aliases": [ 00:18:38.320 "5e82e6ca-7f71-402f-a3ef-2aa8a39a85ed" 00:18:38.320 ], 00:18:38.320 "product_name": "Malloc disk", 00:18:38.320 "block_size": 4096, 00:18:38.320 "num_blocks": 256, 00:18:38.320 "uuid": "5e82e6ca-7f71-402f-a3ef-2aa8a39a85ed", 00:18:38.320 "assigned_rate_limits": { 00:18:38.320 "rw_ios_per_sec": 0, 00:18:38.320 "rw_mbytes_per_sec": 0, 00:18:38.320 "r_mbytes_per_sec": 0, 00:18:38.320 "w_mbytes_per_sec": 0 00:18:38.320 }, 00:18:38.320 "claimed": false, 00:18:38.320 "zoned": false, 00:18:38.320 "supported_io_types": { 00:18:38.320 "read": true, 00:18:38.320 "write": true, 00:18:38.320 "unmap": true, 00:18:38.320 "flush": true, 00:18:38.320 "reset": true, 00:18:38.320 "nvme_admin": false, 00:18:38.320 "nvme_io": false, 00:18:38.320 "nvme_io_md": false, 00:18:38.320 "write_zeroes": true, 
00:18:38.320 "zcopy": true, 00:18:38.320 "get_zone_info": false, 00:18:38.320 "zone_management": false, 00:18:38.320 "zone_append": false, 00:18:38.320 "compare": false, 00:18:38.320 "compare_and_write": false, 00:18:38.320 "abort": true, 00:18:38.320 "seek_hole": false, 00:18:38.320 "seek_data": false, 00:18:38.320 "copy": true, 00:18:38.320 "nvme_iov_md": false 00:18:38.320 }, 00:18:38.320 "memory_domains": [ 00:18:38.320 { 00:18:38.320 "dma_device_id": "system", 00:18:38.320 "dma_device_type": 1 00:18:38.320 }, 00:18:38.320 { 00:18:38.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.320 "dma_device_type": 2 00:18:38.320 } 00:18:38.320 ], 00:18:38.320 "driver_specific": {} 00:18:38.320 } 00:18:38.320 ]' 00:18:38.320 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:18:38.320 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:18:38.320 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.320 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.320 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:18:38.320 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:18:38.320 ************************************ 00:18:38.320 END TEST rpc_plugins 00:18:38.320 ************************************ 00:18:38.320 23:00:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:18:38.320 00:18:38.320 real 0m0.121s 00:18:38.320 user 0m0.062s 00:18:38.320 sys 0m0.017s 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.320 23:00:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:38.320 23:00:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:18:38.320 23:00:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:38.320 23:00:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.320 23:00:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:38.320 ************************************ 00:18:38.320 START TEST rpc_trace_cmd_test 00:18:38.320 ************************************ 00:18:38.320 23:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:18:38.320 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:18:38.320 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:18:38.320 23:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.320 23:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.320 23:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.320 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:18:38.320 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57133", 00:18:38.320 "tpoint_group_mask": "0x8", 00:18:38.320 "iscsi_conn": { 00:18:38.320 "mask": "0x2", 00:18:38.320 "tpoint_mask": "0x0" 00:18:38.320 }, 00:18:38.320 "scsi": { 00:18:38.320 
"mask": "0x4", 00:18:38.320 "tpoint_mask": "0x0" 00:18:38.320 }, 00:18:38.320 "bdev": { 00:18:38.320 "mask": "0x8", 00:18:38.320 "tpoint_mask": "0xffffffffffffffff" 00:18:38.320 }, 00:18:38.320 "nvmf_rdma": { 00:18:38.320 "mask": "0x10", 00:18:38.320 "tpoint_mask": "0x0" 00:18:38.320 }, 00:18:38.320 "nvmf_tcp": { 00:18:38.320 "mask": "0x20", 00:18:38.320 "tpoint_mask": "0x0" 00:18:38.320 }, 00:18:38.320 "ftl": { 00:18:38.320 "mask": "0x40", 00:18:38.320 "tpoint_mask": "0x0" 00:18:38.320 }, 00:18:38.320 "blobfs": { 00:18:38.321 "mask": "0x80", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 }, 00:18:38.321 "dsa": { 00:18:38.321 "mask": "0x200", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 }, 00:18:38.321 "thread": { 00:18:38.321 "mask": "0x400", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 }, 00:18:38.321 "nvme_pcie": { 00:18:38.321 "mask": "0x800", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 }, 00:18:38.321 "iaa": { 00:18:38.321 "mask": "0x1000", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 }, 00:18:38.321 "nvme_tcp": { 00:18:38.321 "mask": "0x2000", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 }, 00:18:38.321 "bdev_nvme": { 00:18:38.321 "mask": "0x4000", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 }, 00:18:38.321 "sock": { 00:18:38.321 "mask": "0x8000", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 }, 00:18:38.321 "blob": { 00:18:38.321 "mask": "0x10000", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 }, 00:18:38.321 "bdev_raid": { 00:18:38.321 "mask": "0x20000", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 }, 00:18:38.321 "scheduler": { 00:18:38.321 "mask": "0x40000", 00:18:38.321 "tpoint_mask": "0x0" 00:18:38.321 } 00:18:38.321 }' 00:18:38.321 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:18:38.321 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:18:38.321 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:18:38.321 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:18:38.321 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:18:38.578 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:18:38.578 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:18:38.578 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:18:38.578 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:18:38.578 ************************************ 00:18:38.578 END TEST rpc_trace_cmd_test 00:18:38.578 ************************************ 00:18:38.578 23:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:18:38.578 00:18:38.578 real 0m0.164s 00:18:38.578 user 0m0.129s 00:18:38.578 sys 0m0.025s 00:18:38.578 23:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.578 23:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.578 23:00:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:18:38.578 23:00:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:18:38.578 23:00:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:18:38.578 23:00:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:38.578 23:00:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.578 23:00:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:38.578 ************************************ 00:18:38.578 START TEST rpc_daemon_integrity 00:18:38.578 
************************************ 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.578 23:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:18:38.578 { 00:18:38.578 "name": "Malloc2", 00:18:38.578 "aliases": [ 00:18:38.578 "e32efb27-ed3f-478e-8204-e0781fab4a85" 00:18:38.578 ], 00:18:38.578 "product_name": "Malloc disk", 00:18:38.578 "block_size": 512, 00:18:38.578 "num_blocks": 16384, 00:18:38.578 "uuid": "e32efb27-ed3f-478e-8204-e0781fab4a85", 00:18:38.578 "assigned_rate_limits": { 00:18:38.578 "rw_ios_per_sec": 0, 00:18:38.578 "rw_mbytes_per_sec": 0, 00:18:38.578 "r_mbytes_per_sec": 0, 00:18:38.578 "w_mbytes_per_sec": 0 00:18:38.578 }, 00:18:38.578 "claimed": false, 00:18:38.578 "zoned": false, 00:18:38.578 "supported_io_types": { 00:18:38.578 "read": true, 00:18:38.578 "write": true, 00:18:38.578 "unmap": true, 00:18:38.578 "flush": true, 00:18:38.579 "reset": true, 00:18:38.579 "nvme_admin": false, 00:18:38.579 "nvme_io": false, 00:18:38.579 "nvme_io_md": false, 00:18:38.579 "write_zeroes": true, 00:18:38.579 "zcopy": true, 00:18:38.579 "get_zone_info": false, 00:18:38.579 "zone_management": false, 00:18:38.579 "zone_append": false, 00:18:38.579 "compare": false, 00:18:38.579 "compare_and_write": false, 00:18:38.579 "abort": true, 00:18:38.579 "seek_hole": false, 00:18:38.579 "seek_data": false, 00:18:38.579 "copy": true, 00:18:38.579 "nvme_iov_md": false 00:18:38.579 }, 00:18:38.579 "memory_domains": [ 00:18:38.579 { 00:18:38.579 "dma_device_id": "system", 00:18:38.579 "dma_device_type": 1 00:18:38.579 }, 00:18:38.579 { 00:18:38.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.579 "dma_device_type": 2 00:18:38.579 } 00:18:38.579 ], 00:18:38.579 "driver_specific": {} 00:18:38.579 } 00:18:38.579 ]' 00:18:38.579 23:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:18:38.579 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:18:38.579 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:18:38.579 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.579 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 [2024-12-09 23:00:17.009425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:18:38.579 [2024-12-09 23:00:17.009584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.579 [2024-12-09 23:00:17.009610] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:38.579 [2024-12-09 23:00:17.009622] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.579 [2024-12-09 23:00:17.011794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.579 [2024-12-09 23:00:17.011833] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:18:38.579 Passthru0 00:18:38.579 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.579 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:18:38.579 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.579 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.579 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:18:38.579 { 00:18:38.579 "name": "Malloc2", 00:18:38.579 "aliases": [ 00:18:38.579 "e32efb27-ed3f-478e-8204-e0781fab4a85" 00:18:38.579 ], 00:18:38.579 "product_name": "Malloc disk", 00:18:38.579 "block_size": 512, 00:18:38.579 "num_blocks": 16384, 00:18:38.579 "uuid": "e32efb27-ed3f-478e-8204-e0781fab4a85", 00:18:38.579 "assigned_rate_limits": { 00:18:38.579 "rw_ios_per_sec": 0, 00:18:38.579 "rw_mbytes_per_sec": 0, 00:18:38.579 "r_mbytes_per_sec": 0, 00:18:38.579 "w_mbytes_per_sec": 0 00:18:38.579 }, 00:18:38.579 "claimed": true, 00:18:38.579 "claim_type": "exclusive_write", 00:18:38.579 "zoned": false, 00:18:38.579 "supported_io_types": { 00:18:38.579 "read": true, 00:18:38.579 "write": true, 00:18:38.579 "unmap": true, 00:18:38.579 "flush": true, 00:18:38.579 "reset": true, 00:18:38.579 "nvme_admin": false, 00:18:38.579 "nvme_io": false, 00:18:38.579 "nvme_io_md": false, 00:18:38.579 "write_zeroes": true, 00:18:38.579 "zcopy": true, 00:18:38.579 "get_zone_info": false, 00:18:38.579 "zone_management": false, 00:18:38.579 "zone_append": false, 00:18:38.579 "compare": false, 00:18:38.579 "compare_and_write": false, 00:18:38.579 "abort": true, 00:18:38.579 "seek_hole": false, 00:18:38.579 "seek_data": false, 00:18:38.579 "copy": true, 00:18:38.579 "nvme_iov_md": false 00:18:38.579 }, 00:18:38.579 "memory_domains": [ 00:18:38.579 { 00:18:38.579 "dma_device_id": "system", 00:18:38.579 "dma_device_type": 1 00:18:38.579 }, 00:18:38.579 { 00:18:38.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.579 "dma_device_type": 2 00:18:38.579 } 00:18:38.579 ], 00:18:38.579 "driver_specific": {} 00:18:38.579 }, 00:18:38.579 { 00:18:38.579 "name": "Passthru0", 00:18:38.579 "aliases": [ 00:18:38.579 "64be9c40-697e-50f0-9666-69557876e006" 00:18:38.579 ], 00:18:38.579 "product_name": "passthru", 00:18:38.579 "block_size": 512, 00:18:38.579 "num_blocks": 16384, 00:18:38.579 "uuid": "64be9c40-697e-50f0-9666-69557876e006", 00:18:38.579 "assigned_rate_limits": { 00:18:38.579 
"rw_ios_per_sec": 0, 00:18:38.579 "rw_mbytes_per_sec": 0, 00:18:38.579 "r_mbytes_per_sec": 0, 00:18:38.579 "w_mbytes_per_sec": 0 00:18:38.579 }, 00:18:38.579 "claimed": false, 00:18:38.579 "zoned": false, 00:18:38.579 "supported_io_types": { 00:18:38.579 "read": true, 00:18:38.579 "write": true, 00:18:38.579 "unmap": true, 00:18:38.579 "flush": true, 00:18:38.579 "reset": true, 00:18:38.579 "nvme_admin": false, 00:18:38.579 "nvme_io": false, 00:18:38.579 "nvme_io_md": false, 00:18:38.579 "write_zeroes": true, 00:18:38.579 "zcopy": true, 00:18:38.579 "get_zone_info": false, 00:18:38.579 "zone_management": false, 00:18:38.579 "zone_append": false, 00:18:38.579 "compare": false, 00:18:38.579 "compare_and_write": false, 00:18:38.579 "abort": true, 00:18:38.579 "seek_hole": false, 00:18:38.579 "seek_data": false, 00:18:38.579 "copy": true, 00:18:38.579 "nvme_iov_md": false 00:18:38.579 }, 00:18:38.579 "memory_domains": [ 00:18:38.579 { 00:18:38.579 "dma_device_id": "system", 00:18:38.579 "dma_device_type": 1 00:18:38.579 }, 00:18:38.579 { 00:18:38.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.579 "dma_device_type": 2 00:18:38.579 } 00:18:38.579 ], 00:18:38.579 "driver_specific": { 00:18:38.579 "passthru": { 00:18:38.579 "name": "Passthru0", 00:18:38.579 "base_bdev_name": "Malloc2" 00:18:38.579 } 00:18:38.579 } 00:18:38.579 } 00:18:38.579 ]' 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:18:38.837 ************************************ 00:18:38.837 END TEST rpc_daemon_integrity 00:18:38.837 ************************************ 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:18:38.837 00:18:38.837 real 0m0.240s 00:18:38.837 user 0m0.121s 00:18:38.837 sys 0m0.033s 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.837 23:00:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:38.837 23:00:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:38.837 23:00:17 rpc -- rpc/rpc.sh@84 -- # killprocess 57133 00:18:38.837 23:00:17 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 57133 ']' 00:18:38.837 23:00:17 rpc -- common/autotest_common.sh@958 -- # kill -0 57133 00:18:38.837 23:00:17 rpc -- common/autotest_common.sh@959 -- # uname 00:18:38.837 23:00:17 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.837 23:00:17 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57133 00:18:38.837 killing process with pid 57133 00:18:38.837 23:00:17 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.837 23:00:17 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.837 23:00:17 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57133' 00:18:38.837 23:00:17 rpc -- common/autotest_common.sh@973 -- # kill 57133 00:18:38.837 23:00:17 rpc -- common/autotest_common.sh@978 -- # wait 57133 00:18:40.736 ************************************ 00:18:40.736 END TEST rpc 00:18:40.736 ************************************ 00:18:40.736 00:18:40.736 real 0m3.599s 00:18:40.736 user 0m4.052s 00:18:40.736 sys 0m0.572s 00:18:40.736 23:00:18 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.736 23:00:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:40.736 23:00:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:18:40.736 23:00:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:40.736 23:00:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.736 23:00:18 -- common/autotest_common.sh@10 -- # set +x 00:18:40.736 ************************************ 00:18:40.736 START TEST skip_rpc 00:18:40.736 ************************************ 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:18:40.736 * Looking for test storage... 00:18:40.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.736 23:00:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:40.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.736 --rc genhtml_branch_coverage=1 00:18:40.736 --rc genhtml_function_coverage=1 00:18:40.736 --rc genhtml_legend=1 00:18:40.736 --rc geninfo_all_blocks=1 00:18:40.736 --rc geninfo_unexecuted_blocks=1 00:18:40.736 00:18:40.736 ' 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:40.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.736 --rc genhtml_branch_coverage=1 00:18:40.736 --rc genhtml_function_coverage=1 00:18:40.736 --rc genhtml_legend=1 00:18:40.736 --rc geninfo_all_blocks=1 00:18:40.736 --rc geninfo_unexecuted_blocks=1 00:18:40.736 00:18:40.736 ' 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:40.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.736 --rc genhtml_branch_coverage=1 00:18:40.736 --rc genhtml_function_coverage=1 00:18:40.736 --rc genhtml_legend=1 00:18:40.736 --rc geninfo_all_blocks=1 00:18:40.736 --rc geninfo_unexecuted_blocks=1 00:18:40.736 00:18:40.736 ' 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:40.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.736 --rc genhtml_branch_coverage=1 00:18:40.736 --rc genhtml_function_coverage=1 00:18:40.736 --rc genhtml_legend=1 00:18:40.736 --rc geninfo_all_blocks=1 00:18:40.736 --rc geninfo_unexecuted_blocks=1 00:18:40.736 00:18:40.736 ' 00:18:40.736 23:00:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:40.736 23:00:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:40.736 23:00:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.736 23:00:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:40.736 ************************************ 00:18:40.736 START TEST skip_rpc 00:18:40.736 ************************************ 00:18:40.736 23:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:18:40.736 23:00:18 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57346 00:18:40.736 23:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:40.736 23:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:18:40.736 23:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:18:40.736 [2024-12-09 23:00:18.987230] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:18:40.736 [2024-12-09 23:00:18.987356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57346 ] 00:18:40.736 [2024-12-09 23:00:19.153351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.995 [2024-12-09 23:00:19.256263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.298 23:00:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:18:46.298 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:46.298 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:18:46.298 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:46.298 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.298 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:46.298 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.298 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57346 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57346 ']' 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57346 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57346 00:18:46.299 killing process with pid 57346 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57346' 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 57346 00:18:46.299 23:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57346 00:18:46.863 00:18:46.863 real 0m6.264s 00:18:46.863 user 0m5.867s 00:18:46.863 sys 0m0.292s 00:18:46.863 ************************************ 00:18:46.863 END TEST skip_rpc 00:18:46.863 ************************************ 00:18:46.863 23:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.863 23:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.863 23:00:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:18:46.863 23:00:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:46.863 23:00:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.863 23:00:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.863 ************************************ 00:18:46.863 START TEST skip_rpc_with_json 00:18:46.863 ************************************ 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57444 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:46.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57444 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57444 ']' 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.863 23:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:47.119 [2024-12-09 23:00:25.323572] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:18:47.119 [2024-12-09 23:00:25.323694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57444 ] 00:18:47.119 [2024-12-09 23:00:25.480395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.119 [2024-12-09 23:00:25.568296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.708 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.708 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:18:47.708 23:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:18:47.708 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.708 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:47.965 [2024-12-09 23:00:26.172749] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:18:47.965 request: 00:18:47.965 { 00:18:47.965 "trtype": "tcp", 00:18:47.965 "method": "nvmf_get_transports", 00:18:47.965 "req_id": 1 00:18:47.965 } 00:18:47.965 Got JSON-RPC error response 00:18:47.965 response: 00:18:47.965 { 00:18:47.965 "code": -19, 00:18:47.965 "message": "No such device" 00:18:47.965 } 00:18:47.965 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:47.965 23:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:18:47.965 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.965 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:47.965 [2024-12-09 23:00:26.180853] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.965 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.965 23:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:18:47.965 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.965 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:47.965 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.965 23:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:47.965 { 00:18:47.965 "subsystems": [ 00:18:47.965 { 00:18:47.965 "subsystem": "fsdev", 00:18:47.965 "config": [ 00:18:47.965 { 00:18:47.965 "method": "fsdev_set_opts", 00:18:47.965 "params": { 00:18:47.965 "fsdev_io_pool_size": 65535, 00:18:47.965 "fsdev_io_cache_size": 256 00:18:47.965 } 00:18:47.965 } 00:18:47.965 ] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "keyring", 00:18:47.965 "config": [] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "iobuf", 00:18:47.965 "config": [ 00:18:47.965 { 00:18:47.965 "method": "iobuf_set_options", 00:18:47.965 "params": { 00:18:47.965 "small_pool_count": 8192, 00:18:47.965 "large_pool_count": 1024, 00:18:47.965 "small_bufsize": 8192, 00:18:47.965 "large_bufsize": 135168, 00:18:47.965 "enable_numa": false 00:18:47.965 } 00:18:47.965 } 00:18:47.965 ] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "sock", 00:18:47.965 "config": [ 00:18:47.965 { 
00:18:47.965 "method": "sock_set_default_impl", 00:18:47.965 "params": { 00:18:47.965 "impl_name": "posix" 00:18:47.965 } 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "method": "sock_impl_set_options", 00:18:47.965 "params": { 00:18:47.965 "impl_name": "ssl", 00:18:47.965 "recv_buf_size": 4096, 00:18:47.965 "send_buf_size": 4096, 00:18:47.965 "enable_recv_pipe": true, 00:18:47.965 "enable_quickack": false, 00:18:47.965 "enable_placement_id": 0, 00:18:47.965 "enable_zerocopy_send_server": true, 00:18:47.965 "enable_zerocopy_send_client": false, 00:18:47.965 "zerocopy_threshold": 0, 00:18:47.965 "tls_version": 0, 00:18:47.965 "enable_ktls": false 00:18:47.965 } 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "method": "sock_impl_set_options", 00:18:47.965 "params": { 00:18:47.965 "impl_name": "posix", 00:18:47.965 "recv_buf_size": 2097152, 00:18:47.965 "send_buf_size": 2097152, 00:18:47.965 "enable_recv_pipe": true, 00:18:47.965 "enable_quickack": false, 00:18:47.965 "enable_placement_id": 0, 00:18:47.965 "enable_zerocopy_send_server": true, 00:18:47.965 "enable_zerocopy_send_client": false, 00:18:47.965 "zerocopy_threshold": 0, 00:18:47.965 "tls_version": 0, 00:18:47.965 "enable_ktls": false 00:18:47.965 } 00:18:47.965 } 00:18:47.965 ] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "vmd", 00:18:47.965 "config": [] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "accel", 00:18:47.965 "config": [ 00:18:47.965 { 00:18:47.965 "method": "accel_set_options", 00:18:47.965 "params": { 00:18:47.965 "small_cache_size": 128, 00:18:47.965 "large_cache_size": 16, 00:18:47.965 "task_count": 2048, 00:18:47.965 "sequence_count": 2048, 00:18:47.965 "buf_count": 2048 00:18:47.965 } 00:18:47.965 } 00:18:47.965 ] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "bdev", 00:18:47.965 "config": [ 00:18:47.965 { 00:18:47.965 "method": "bdev_set_options", 00:18:47.965 "params": { 00:18:47.965 "bdev_io_pool_size": 65535, 00:18:47.965 "bdev_io_cache_size": 256, 00:18:47.965 "bdev_auto_examine": true, 00:18:47.965 "iobuf_small_cache_size": 128, 00:18:47.965 "iobuf_large_cache_size": 16 00:18:47.965 } 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "method": "bdev_raid_set_options", 00:18:47.965 "params": { 00:18:47.965 "process_window_size_kb": 1024, 00:18:47.965 "process_max_bandwidth_mb_sec": 0 00:18:47.965 } 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "method": "bdev_iscsi_set_options", 00:18:47.965 "params": { 00:18:47.965 "timeout_sec": 30 00:18:47.965 } 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "method": "bdev_nvme_set_options", 00:18:47.965 "params": { 00:18:47.965 "action_on_timeout": "none", 00:18:47.965 "timeout_us": 0, 00:18:47.965 "timeout_admin_us": 0, 00:18:47.965 "keep_alive_timeout_ms": 10000, 00:18:47.965 "arbitration_burst": 0, 00:18:47.965 "low_priority_weight": 0, 00:18:47.965 "medium_priority_weight": 0, 00:18:47.965 "high_priority_weight": 0, 00:18:47.965 "nvme_adminq_poll_period_us": 10000, 00:18:47.965 "nvme_ioq_poll_period_us": 0, 00:18:47.965 "io_queue_requests": 0, 00:18:47.965 "delay_cmd_submit": true, 00:18:47.965 "transport_retry_count": 4, 00:18:47.965 "bdev_retry_count": 3, 00:18:47.965 "transport_ack_timeout": 0, 00:18:47.965 "ctrlr_loss_timeout_sec": 0, 00:18:47.965 "reconnect_delay_sec": 0, 00:18:47.965 "fast_io_fail_timeout_sec": 0, 00:18:47.965 "disable_auto_failback": false, 00:18:47.965 "generate_uuids": false, 00:18:47.965 "transport_tos": 0, 00:18:47.965 "nvme_error_stat": false, 00:18:47.965 "rdma_srq_size": 0, 00:18:47.965 "io_path_stat": false, 
00:18:47.965 "allow_accel_sequence": false, 00:18:47.965 "rdma_max_cq_size": 0, 00:18:47.965 "rdma_cm_event_timeout_ms": 0, 00:18:47.965 "dhchap_digests": [ 00:18:47.965 "sha256", 00:18:47.965 "sha384", 00:18:47.965 "sha512" 00:18:47.965 ], 00:18:47.965 "dhchap_dhgroups": [ 00:18:47.965 "null", 00:18:47.965 "ffdhe2048", 00:18:47.965 "ffdhe3072", 00:18:47.965 "ffdhe4096", 00:18:47.965 "ffdhe6144", 00:18:47.965 "ffdhe8192" 00:18:47.965 ] 00:18:47.965 } 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "method": "bdev_nvme_set_hotplug", 00:18:47.965 "params": { 00:18:47.965 "period_us": 100000, 00:18:47.965 "enable": false 00:18:47.965 } 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "method": "bdev_wait_for_examine" 00:18:47.965 } 00:18:47.965 ] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "scsi", 00:18:47.965 "config": null 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "scheduler", 00:18:47.965 "config": [ 00:18:47.965 { 00:18:47.965 "method": "framework_set_scheduler", 00:18:47.965 "params": { 00:18:47.965 "name": "static" 00:18:47.965 } 00:18:47.965 } 00:18:47.965 ] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "vhost_scsi", 00:18:47.965 "config": [] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "vhost_blk", 00:18:47.965 "config": [] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "ublk", 00:18:47.965 "config": [] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "nbd", 00:18:47.965 "config": [] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "nvmf", 00:18:47.965 "config": [ 00:18:47.965 { 00:18:47.965 "method": "nvmf_set_config", 00:18:47.965 "params": { 00:18:47.965 "discovery_filter": "match_any", 00:18:47.965 "admin_cmd_passthru": { 00:18:47.965 "identify_ctrlr": false 00:18:47.965 }, 00:18:47.965 "dhchap_digests": [ 00:18:47.965 "sha256", 00:18:47.965 "sha384", 00:18:47.965 "sha512" 00:18:47.965 ], 00:18:47.965 "dhchap_dhgroups": [ 00:18:47.965 "null", 00:18:47.965 "ffdhe2048", 00:18:47.965 "ffdhe3072", 00:18:47.965 "ffdhe4096", 00:18:47.965 "ffdhe6144", 00:18:47.965 "ffdhe8192" 00:18:47.965 ] 00:18:47.965 } 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "method": "nvmf_set_max_subsystems", 00:18:47.965 "params": { 00:18:47.965 "max_subsystems": 1024 00:18:47.965 } 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "method": "nvmf_set_crdt", 00:18:47.965 "params": { 00:18:47.965 "crdt1": 0, 00:18:47.965 "crdt2": 0, 00:18:47.965 "crdt3": 0 00:18:47.965 } 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "method": "nvmf_create_transport", 00:18:47.965 "params": { 00:18:47.965 "trtype": "TCP", 00:18:47.965 "max_queue_depth": 128, 00:18:47.965 "max_io_qpairs_per_ctrlr": 127, 00:18:47.965 "in_capsule_data_size": 4096, 00:18:47.965 "max_io_size": 131072, 00:18:47.965 "io_unit_size": 131072, 00:18:47.965 "max_aq_depth": 128, 00:18:47.965 "num_shared_buffers": 511, 00:18:47.965 "buf_cache_size": 4294967295, 00:18:47.965 "dif_insert_or_strip": false, 00:18:47.965 "zcopy": false, 00:18:47.965 "c2h_success": true, 00:18:47.965 "sock_priority": 0, 00:18:47.965 "abort_timeout_sec": 1, 00:18:47.965 "ack_timeout": 0, 00:18:47.965 "data_wr_pool_size": 0 00:18:47.965 } 00:18:47.965 } 00:18:47.965 ] 00:18:47.965 }, 00:18:47.965 { 00:18:47.965 "subsystem": "iscsi", 00:18:47.965 "config": [ 00:18:47.965 { 00:18:47.965 "method": "iscsi_set_options", 00:18:47.965 "params": { 00:18:47.965 "node_base": "iqn.2016-06.io.spdk", 00:18:47.965 "max_sessions": 128, 00:18:47.965 "max_connections_per_session": 2, 00:18:47.965 "max_queue_depth": 64, 00:18:47.965 
"default_time2wait": 2, 00:18:47.965 "default_time2retain": 20, 00:18:47.966 "first_burst_length": 8192, 00:18:47.966 "immediate_data": true, 00:18:47.966 "allow_duplicated_isid": false, 00:18:47.966 "error_recovery_level": 0, 00:18:47.966 "nop_timeout": 60, 00:18:47.966 "nop_in_interval": 30, 00:18:47.966 "disable_chap": false, 00:18:47.966 "require_chap": false, 00:18:47.966 "mutual_chap": false, 00:18:47.966 "chap_group": 0, 00:18:47.966 "max_large_datain_per_connection": 64, 00:18:47.966 "max_r2t_per_connection": 4, 00:18:47.966 "pdu_pool_size": 36864, 00:18:47.966 "immediate_data_pool_size": 16384, 00:18:47.966 "data_out_pool_size": 2048 00:18:47.966 } 00:18:47.966 } 00:18:47.966 ] 00:18:47.966 } 00:18:47.966 ] 00:18:47.966 } 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57444 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57444 ']' 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57444 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57444 00:18:47.966 killing process with pid 57444 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57444' 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57444 00:18:47.966 23:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57444 00:18:49.337 23:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57478 00:18:49.337 23:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:18:49.337 23:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57478 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57478 ']' 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57478 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57478 00:18:54.665 killing process with pid 57478 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57478' 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57478 00:18:54.665 23:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57478 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:55.602 ************************************ 00:18:55.602 END TEST skip_rpc_with_json 00:18:55.602 ************************************ 00:18:55.602 00:18:55.602 real 0m8.650s 00:18:55.602 user 0m8.224s 00:18:55.602 sys 0m0.651s 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:55.602 23:00:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:18:55.602 23:00:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:55.602 23:00:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.602 23:00:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.602 ************************************ 00:18:55.602 START TEST skip_rpc_with_delay 00:18:55.602 ************************************ 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:18:55.602 23:00:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:55.602 [2024-12-09 23:00:34.037673] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
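The *ERROR* line above is the expected outcome of the skip_rpc_with_delay case: spdk_tgt rejects '--wait-for-rpc' whenever '--no-rpc-server' disables the RPC listener, and the harness's NOT wrapper (whose es= bookkeeping follows next) converts that non-zero exit into a pass. A minimal stand-alone sketch of the same check, with a plain '!' standing in for the NOT/valid_exec_arg machinery seen in the trace:

# Expect the flag combination to be rejected at startup; the check passes only on a non-zero exit.
if ! /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo 'got the expected "--wait-for-rpc" startup failure'
fi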
00:18:55.863 23:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:18:55.863 23:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.863 ************************************ 00:18:55.863 END TEST skip_rpc_with_delay 00:18:55.863 ************************************ 00:18:55.863 23:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.863 23:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.863 00:18:55.863 real 0m0.128s 00:18:55.863 user 0m0.066s 00:18:55.863 sys 0m0.060s 00:18:55.863 23:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.863 23:00:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 23:00:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:18:55.863 23:00:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:18:55.863 23:00:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:18:55.863 23:00:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:55.863 23:00:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.863 23:00:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 ************************************ 00:18:55.863 START TEST exit_on_failed_rpc_init 00:18:55.863 ************************************ 00:18:55.863 23:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:18:55.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.863 23:00:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57601 00:18:55.863 23:00:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57601 00:18:55.863 23:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57601 ']' 00:18:55.863 23:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.863 23:00:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:55.863 23:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.863 23:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.863 23:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.863 23:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:18:55.863 [2024-12-09 23:00:34.253170] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:18:55.863 [2024-12-09 23:00:34.253386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57601 ] 00:18:56.124 [2024-12-09 23:00:34.429194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.124 [2024-12-09 23:00:34.539594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.698 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.698 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:18:56.698 23:00:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:56.698 23:00:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:56.698 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:18:56.698 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:56.698 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.698 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.698 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.698 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.699 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.699 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.699 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.699 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:18:56.699 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:56.966 [2024-12-09 23:00:35.212111] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:18:56.966 [2024-12-09 23:00:35.212410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57619 ] 00:18:56.966 [2024-12-09 23:00:35.372812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.258 [2024-12-09 23:00:35.503420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.258 [2024-12-09 23:00:35.503530] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:57.258 [2024-12-09 23:00:35.503544] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:57.258 [2024-12-09 23:00:35.503561] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57601 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57601 ']' 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57601 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.258 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57601 00:18:57.518 killing process with pid 57601 00:18:57.518 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.518 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.518 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57601' 00:18:57.518 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57601 00:18:57.518 23:00:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57601 00:18:58.931 ************************************ 00:18:58.931 END TEST exit_on_failed_rpc_init 00:18:58.931 ************************************ 00:18:58.931 00:18:58.931 real 0m3.087s 00:18:58.931 user 0m3.472s 00:18:58.931 sys 0m0.430s 00:18:58.931 23:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.931 23:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:18:58.931 23:00:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:58.931 00:18:58.931 real 0m18.511s 00:18:58.931 user 0m17.779s 00:18:58.931 sys 0m1.611s 00:18:58.931 23:00:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.931 23:00:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.931 ************************************ 00:18:58.931 END TEST skip_rpc 00:18:58.931 ************************************ 00:18:58.931 23:00:37 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:18:58.931 23:00:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:58.931 23:00:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.931 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:18:58.931 
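Before the rpc_client suite opens below, the exit_on_failed_rpc_init trace above is worth restating: a second spdk_tgt aimed at the default /var/tmp/spdk.sock must abort with the 'socket path ... in use' error, after which the harness normalizes the exit status (es=234 -> es=106 -> es=1) before asserting failure and killing pid 57601. A condensed sketch of the same collision, assuming the second instance aborts immediately as it does here (sleep stands in for the harness's waitforlisten):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$SPDK_TGT" -m 0x1 &              # first instance claims /var/tmp/spdk.sock
pid=$!
sleep 1                           # crude stand-in for waitforlisten
if "$SPDK_TGT" -m 0x2; then       # second instance is expected to abort: socket already in use
    echo 'unexpected success: second target should have failed to bind the RPC socket' >&2
fi
kill "$pid" && wait "$pid"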
************************************ 00:18:58.931 START TEST rpc_client 00:18:58.931 ************************************ 00:18:58.931 23:00:37 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:18:58.931 * Looking for test storage... 00:18:58.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:18:58.931 23:00:37 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:58.931 23:00:37 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:18:58.931 23:00:37 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:59.193 23:00:37 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.193 23:00:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:18:59.193 23:00:37 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.193 23:00:37 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:59.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.193 --rc genhtml_branch_coverage=1 00:18:59.193 --rc genhtml_function_coverage=1 00:18:59.193 --rc genhtml_legend=1 00:18:59.193 --rc geninfo_all_blocks=1 00:18:59.193 --rc geninfo_unexecuted_blocks=1 00:18:59.193 00:18:59.193 ' 00:18:59.193 23:00:37 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:59.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.193 --rc genhtml_branch_coverage=1 00:18:59.193 --rc genhtml_function_coverage=1 00:18:59.193 --rc genhtml_legend=1 00:18:59.193 --rc geninfo_all_blocks=1 00:18:59.193 --rc geninfo_unexecuted_blocks=1 00:18:59.193 00:18:59.193 ' 00:18:59.193 23:00:37 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:59.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.193 --rc genhtml_branch_coverage=1 00:18:59.193 --rc genhtml_function_coverage=1 00:18:59.193 --rc genhtml_legend=1 00:18:59.193 --rc geninfo_all_blocks=1 00:18:59.193 --rc geninfo_unexecuted_blocks=1 00:18:59.193 00:18:59.193 ' 00:18:59.193 23:00:37 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:59.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.193 --rc genhtml_branch_coverage=1 00:18:59.193 --rc genhtml_function_coverage=1 00:18:59.193 --rc genhtml_legend=1 00:18:59.193 --rc geninfo_all_blocks=1 00:18:59.193 --rc geninfo_unexecuted_blocks=1 00:18:59.193 00:18:59.193 ' 00:18:59.193 23:00:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:18:59.193 OK 00:18:59.193 23:00:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:18:59.193 00:18:59.193 real 0m0.201s 00:18:59.193 user 0m0.121s 00:18:59.193 sys 0m0.085s 00:18:59.193 23:00:37 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.193 23:00:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:18:59.193 ************************************ 00:18:59.193 END TEST rpc_client 00:18:59.193 ************************************ 00:18:59.194 23:00:37 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:18:59.194 23:00:37 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:59.194 23:00:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.194 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:18:59.194 ************************************ 00:18:59.194 START TEST json_config 00:18:59.194 ************************************ 00:18:59.194 23:00:37 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:18:59.194 23:00:37 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:59.194 23:00:37 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:18:59.194 23:00:37 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:59.459 23:00:37 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:59.459 23:00:37 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.459 23:00:37 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.459 23:00:37 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.459 23:00:37 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.459 23:00:37 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.459 23:00:37 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.459 23:00:37 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.459 23:00:37 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.459 23:00:37 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.459 23:00:37 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.459 23:00:37 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.459 23:00:37 json_config -- scripts/common.sh@344 -- # case "$op" in 00:18:59.459 23:00:37 json_config -- scripts/common.sh@345 -- # : 1 00:18:59.459 23:00:37 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.459 23:00:37 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.459 23:00:37 json_config -- scripts/common.sh@365 -- # decimal 1 00:18:59.459 23:00:37 json_config -- scripts/common.sh@353 -- # local d=1 00:18:59.459 23:00:37 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.459 23:00:37 json_config -- scripts/common.sh@355 -- # echo 1 00:18:59.459 23:00:37 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.459 23:00:37 json_config -- scripts/common.sh@366 -- # decimal 2 00:18:59.459 23:00:37 json_config -- scripts/common.sh@353 -- # local d=2 00:18:59.459 23:00:37 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.459 23:00:37 json_config -- scripts/common.sh@355 -- # echo 2 00:18:59.459 23:00:37 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.459 23:00:37 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.459 23:00:37 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.459 23:00:37 json_config -- scripts/common.sh@368 -- # return 0 00:18:59.459 23:00:37 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.459 23:00:37 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:59.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.459 --rc genhtml_branch_coverage=1 00:18:59.459 --rc genhtml_function_coverage=1 00:18:59.459 --rc genhtml_legend=1 00:18:59.459 --rc geninfo_all_blocks=1 00:18:59.459 --rc geninfo_unexecuted_blocks=1 00:18:59.459 00:18:59.459 ' 00:18:59.459 23:00:37 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:59.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.459 --rc genhtml_branch_coverage=1 00:18:59.460 --rc genhtml_function_coverage=1 00:18:59.460 --rc genhtml_legend=1 00:18:59.460 --rc geninfo_all_blocks=1 00:18:59.460 --rc geninfo_unexecuted_blocks=1 00:18:59.460 00:18:59.460 ' 00:18:59.460 23:00:37 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:59.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.460 --rc genhtml_branch_coverage=1 00:18:59.460 --rc genhtml_function_coverage=1 00:18:59.460 --rc genhtml_legend=1 00:18:59.460 --rc geninfo_all_blocks=1 00:18:59.460 --rc geninfo_unexecuted_blocks=1 00:18:59.460 00:18:59.460 ' 00:18:59.460 23:00:37 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:59.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.460 --rc genhtml_branch_coverage=1 00:18:59.460 --rc genhtml_function_coverage=1 00:18:59.460 --rc genhtml_legend=1 00:18:59.460 --rc geninfo_all_blocks=1 00:18:59.460 --rc geninfo_unexecuted_blocks=1 00:18:59.460 00:18:59.460 ' 00:18:59.460 23:00:37 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.460 23:00:37 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:aa33a105-beb8-4410-b0bb-bb954c91bba9 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=aa33a105-beb8-4410-b0bb-bb954c91bba9 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.460 23:00:37 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.460 23:00:37 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.460 23:00:37 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.460 23:00:37 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.460 23:00:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.460 23:00:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.460 23:00:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.460 23:00:37 json_config -- paths/export.sh@5 -- # export PATH 00:18:59.460 23:00:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@51 -- # : 0 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.460 23:00:37 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.460 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.460 23:00:37 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.460 23:00:37 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:18:59.460 23:00:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:18:59.460 23:00:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:18:59.460 23:00:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:18:59.460 23:00:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:18:59.460 WARNING: No tests are enabled so not running JSON configuration tests 00:18:59.460 23:00:37 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:18:59.460 23:00:37 json_config -- json_config/json_config.sh@28 -- # exit 0 00:18:59.460 00:18:59.460 real 0m0.152s 00:18:59.460 user 0m0.099s 00:18:59.460 sys 0m0.052s 00:18:59.460 23:00:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.460 23:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:59.460 ************************************ 00:18:59.460 END TEST json_config 00:18:59.460 ************************************ 00:18:59.460 23:00:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:18:59.460 23:00:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:59.460 23:00:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.460 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:18:59.460 ************************************ 00:18:59.460 START TEST json_config_extra_key 00:18:59.460 ************************************ 00:18:59.460 23:00:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:18:59.460 23:00:37 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:59.461 23:00:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:18:59.461 23:00:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:59.461 23:00:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.461 23:00:37 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.461 23:00:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:18:59.461 23:00:37 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.461 23:00:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:59.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.461 --rc genhtml_branch_coverage=1 00:18:59.461 --rc genhtml_function_coverage=1 00:18:59.461 --rc genhtml_legend=1 00:18:59.461 --rc geninfo_all_blocks=1 00:18:59.461 --rc geninfo_unexecuted_blocks=1 00:18:59.461 00:18:59.461 ' 00:18:59.461 23:00:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:59.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.461 --rc genhtml_branch_coverage=1 00:18:59.461 --rc genhtml_function_coverage=1 00:18:59.461 --rc genhtml_legend=1 00:18:59.461 --rc geninfo_all_blocks=1 00:18:59.461 --rc geninfo_unexecuted_blocks=1 00:18:59.461 00:18:59.461 ' 00:18:59.461 23:00:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:59.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.461 --rc genhtml_branch_coverage=1 00:18:59.461 --rc genhtml_function_coverage=1 00:18:59.461 --rc genhtml_legend=1 00:18:59.461 --rc geninfo_all_blocks=1 00:18:59.461 --rc geninfo_unexecuted_blocks=1 00:18:59.461 00:18:59.461 ' 00:18:59.461 23:00:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:59.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.461 --rc genhtml_branch_coverage=1 00:18:59.461 --rc 
genhtml_function_coverage=1 00:18:59.461 --rc genhtml_legend=1 00:18:59.461 --rc geninfo_all_blocks=1 00:18:59.461 --rc geninfo_unexecuted_blocks=1 00:18:59.461 00:18:59.461 ' 00:18:59.461 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:59.461 23:00:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:18:59.724 23:00:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.724 23:00:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.724 23:00:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.724 23:00:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.724 23:00:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.724 23:00:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:aa33a105-beb8-4410-b0bb-bb954c91bba9 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=aa33a105-beb8-4410-b0bb-bb954c91bba9 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.725 23:00:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.725 23:00:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.725 23:00:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.725 23:00:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.725 23:00:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.725 23:00:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.725 23:00:37 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.725 23:00:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:18:59.725 23:00:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.725 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.725 23:00:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:18:59.725 INFO: launching applications... 
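
A note on the `[: : integer expression expected` message recorded above (and earlier in the json_config run): nvmf/common.sh line 33 executes `'[' '' -eq 1 ']'`, i.e. the variable under test expands to an empty string, and `-eq` requires an integer on both sides. A minimal defensive rewrite, using a stand-in name since the trace shows only the expanded (empty) value:

    # Sketch of a guard for the failing test at nvmf/common.sh:33.
    # SOME_FLAG is hypothetical; the log records only its empty expansion.
    # "${SOME_FLAG:-0}" substitutes 0 when the variable is unset or empty,
    # so the -eq comparison always sees an integer and the warning goes away.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
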
00:18:59.725 23:00:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57812 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:18:59.725 Waiting for target to run... 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57812 /var/tmp/spdk_tgt.sock 00:18:59.725 23:00:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57812 ']' 00:18:59.725 23:00:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:18:59.725 23:00:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:18:59.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:18:59.725 23:00:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.725 23:00:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:18:59.725 23:00:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.725 23:00:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:18:59.725 [2024-12-09 23:00:38.009899] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:18:59.725 [2024-12-09 23:00:38.010181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57812 ] 00:18:59.986 [2024-12-09 23:00:38.335013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.986 [2024-12-09 23:00:38.416210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.555 00:19:00.555 INFO: shutting down applications... 00:19:00.555 23:00:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.555 23:00:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:19:00.555 23:00:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:19:00.555 23:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
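
The start/stop pattern traced around this point: json_config_test_start_app launches spdk_tgt with the extra-key JSON config and a dedicated RPC socket, waitforlisten blocks until that UNIX socket accepts connections, and json_config_test_shutdown_app (traced just below) sends SIGINT and then polls with `kill -0` for up to 30 half-second intervals. A condensed sketch of that lifecycle; paths and flags are copied from the trace, the rest is an assumption about the helpers' internals:

    # Condensed sketch of the app lifecycle traced here; the real
    # json_config/common.sh also tracks per-app sockets and parameters.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    pid=$!

    # ... waitforlisten polls until /var/tmp/spdk_tgt.sock accepts RPCs ...

    kill -SIGINT "$pid"                 # ask the target to shut down cleanly
    for ((i = 0; i < 30; i++)); do      # bounded poll, as in common.sh@40-45
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only checks existence
        sleep 0.5
    done
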
00:19:00.555 23:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:19:00.555 23:00:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:19:00.555 23:00:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:19:00.555 23:00:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57812 ]] 00:19:00.555 23:00:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57812 00:19:00.555 23:00:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:19:00.555 23:00:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:00.555 23:00:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:19:00.555 23:00:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:01.126 23:00:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:01.126 23:00:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:01.126 23:00:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:19:01.126 23:00:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:01.699 23:00:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:01.699 23:00:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:01.699 23:00:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:19:01.699 23:00:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:01.959 23:00:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:01.959 23:00:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:01.959 23:00:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:19:01.959 SPDK target shutdown done 00:19:01.959 Success 00:19:01.959 23:00:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:19:01.959 23:00:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:19:01.959 23:00:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:19:01.959 23:00:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:19:01.959 23:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:19:01.959 ************************************ 00:19:01.959 END TEST json_config_extra_key 00:19:01.959 ************************************ 00:19:01.959 00:19:01.960 real 0m2.574s 00:19:01.960 user 0m2.357s 00:19:01.960 sys 0m0.400s 00:19:01.960 23:00:40 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.960 23:00:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:19:01.960 23:00:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:19:01.960 23:00:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:01.960 23:00:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.960 23:00:40 -- common/autotest_common.sh@10 -- # set +x 00:19:01.960 ************************************ 00:19:01.960 START TEST alias_rpc 00:19:01.960 ************************************ 00:19:01.960 23:00:40 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:19:02.219 * Looking for test storage... 
00:19:02.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:19:02.219 23:00:40 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:02.219 23:00:40 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:19:02.219 23:00:40 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.220 23:00:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:02.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.220 --rc genhtml_branch_coverage=1 00:19:02.220 --rc genhtml_function_coverage=1 00:19:02.220 --rc genhtml_legend=1 00:19:02.220 --rc geninfo_all_blocks=1 00:19:02.220 --rc geninfo_unexecuted_blocks=1 00:19:02.220 00:19:02.220 ' 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:02.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.220 --rc genhtml_branch_coverage=1 00:19:02.220 --rc genhtml_function_coverage=1 00:19:02.220 --rc genhtml_legend=1 00:19:02.220 --rc geninfo_all_blocks=1 00:19:02.220 --rc geninfo_unexecuted_blocks=1 00:19:02.220 00:19:02.220 ' 00:19:02.220 23:00:40 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:02.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.220 --rc genhtml_branch_coverage=1 00:19:02.220 --rc genhtml_function_coverage=1 00:19:02.220 --rc genhtml_legend=1 00:19:02.220 --rc geninfo_all_blocks=1 00:19:02.220 --rc geninfo_unexecuted_blocks=1 00:19:02.220 00:19:02.220 ' 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:02.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.220 --rc genhtml_branch_coverage=1 00:19:02.220 --rc genhtml_function_coverage=1 00:19:02.220 --rc genhtml_legend=1 00:19:02.220 --rc geninfo_all_blocks=1 00:19:02.220 --rc geninfo_unexecuted_blocks=1 00:19:02.220 00:19:02.220 ' 00:19:02.220 23:00:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:19:02.220 23:00:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57899 00:19:02.220 23:00:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57899 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57899 ']' 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.220 23:00:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.220 23:00:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.220 [2024-12-09 23:00:40.637582] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:19:02.220 [2024-12-09 23:00:40.637701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57899 ] 00:19:02.481 [2024-12-09 23:00:40.797001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.481 [2024-12-09 23:00:40.899442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.055 23:00:41 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.055 23:00:41 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:03.055 23:00:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:19:03.316 23:00:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57899 00:19:03.316 23:00:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57899 ']' 00:19:03.316 23:00:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57899 00:19:03.316 23:00:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:19:03.316 23:00:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.316 23:00:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57899 00:19:03.316 killing process with pid 57899 00:19:03.316 23:00:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.316 23:00:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.316 23:00:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57899' 00:19:03.316 23:00:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 57899 00:19:03.316 23:00:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 57899 00:19:05.229 ************************************ 00:19:05.229 END TEST alias_rpc 00:19:05.229 ************************************ 00:19:05.229 00:19:05.229 real 0m2.896s 00:19:05.229 user 0m2.985s 00:19:05.229 sys 0m0.420s 00:19:05.229 23:00:43 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.229 23:00:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:05.229 23:00:43 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:19:05.229 23:00:43 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:19:05.229 23:00:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.229 23:00:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.229 23:00:43 -- common/autotest_common.sh@10 -- # set +x 00:19:05.229 ************************************ 00:19:05.229 START TEST spdkcli_tcp 00:19:05.229 ************************************ 00:19:05.229 23:00:43 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:19:05.229 * Looking for test storage... 
00:19:05.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:05.229 23:00:43 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:05.229 23:00:43 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:05.229 23:00:43 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:19:05.229 23:00:43 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:05.229 23:00:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.229 23:00:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.229 23:00:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.229 23:00:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.229 23:00:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.229 23:00:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.230 23:00:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:05.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.230 --rc genhtml_branch_coverage=1 00:19:05.230 --rc genhtml_function_coverage=1 00:19:05.230 --rc genhtml_legend=1 00:19:05.230 --rc geninfo_all_blocks=1 00:19:05.230 --rc geninfo_unexecuted_blocks=1 00:19:05.230 00:19:05.230 ' 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:05.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.230 --rc genhtml_branch_coverage=1 00:19:05.230 --rc genhtml_function_coverage=1 00:19:05.230 --rc genhtml_legend=1 00:19:05.230 --rc geninfo_all_blocks=1 00:19:05.230 --rc geninfo_unexecuted_blocks=1 00:19:05.230 
00:19:05.230 ' 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:05.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.230 --rc genhtml_branch_coverage=1 00:19:05.230 --rc genhtml_function_coverage=1 00:19:05.230 --rc genhtml_legend=1 00:19:05.230 --rc geninfo_all_blocks=1 00:19:05.230 --rc geninfo_unexecuted_blocks=1 00:19:05.230 00:19:05.230 ' 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:05.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.230 --rc genhtml_branch_coverage=1 00:19:05.230 --rc genhtml_function_coverage=1 00:19:05.230 --rc genhtml_legend=1 00:19:05.230 --rc geninfo_all_blocks=1 00:19:05.230 --rc geninfo_unexecuted_blocks=1 00:19:05.230 00:19:05.230 ' 00:19:05.230 23:00:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:05.230 23:00:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:05.230 23:00:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:05.230 23:00:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:19:05.230 23:00:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:19:05.230 23:00:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.230 23:00:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:05.230 23:00:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57997 00:19:05.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.230 23:00:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57997 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57997 ']' 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.230 23:00:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.230 23:00:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:05.230 [2024-12-09 23:00:43.609989] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
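
The spdkcli_tcp steps traced below exercise the JSON-RPC server over TCP rather than the usual UNIX socket: socat listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP endpoint (the rpc_get_methods call returns the full method list that follows). A reduced sketch of that bridge; the `fork` and `reuseaddr` socat options are assumptions for handling repeated connections, not taken from the log:

    # Bridge TCP 9998 to the spdk_tgt UNIX-domain RPC socket, then issue an
    # RPC over TCP. Flags on rpc.py are copied verbatim from the trace below.
    socat TCP-LISTEN:9998,fork,reuseaddr UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"
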
00:19:05.230 [2024-12-09 23:00:43.610112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57997 ] 00:19:05.495 [2024-12-09 23:00:43.774133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:05.495 [2024-12-09 23:00:43.879295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.495 [2024-12-09 23:00:43.879296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.081 23:00:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.081 23:00:44 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:19:06.081 23:00:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58014 00:19:06.081 23:00:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:19:06.081 23:00:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:19:06.343 [ 00:19:06.343 "bdev_malloc_delete", 00:19:06.343 "bdev_malloc_create", 00:19:06.343 "bdev_null_resize", 00:19:06.343 "bdev_null_delete", 00:19:06.343 "bdev_null_create", 00:19:06.343 "bdev_nvme_cuse_unregister", 00:19:06.343 "bdev_nvme_cuse_register", 00:19:06.343 "bdev_opal_new_user", 00:19:06.343 "bdev_opal_set_lock_state", 00:19:06.343 "bdev_opal_delete", 00:19:06.343 "bdev_opal_get_info", 00:19:06.343 "bdev_opal_create", 00:19:06.343 "bdev_nvme_opal_revert", 00:19:06.343 "bdev_nvme_opal_init", 00:19:06.343 "bdev_nvme_send_cmd", 00:19:06.343 "bdev_nvme_set_keys", 00:19:06.343 "bdev_nvme_get_path_iostat", 00:19:06.343 "bdev_nvme_get_mdns_discovery_info", 00:19:06.343 "bdev_nvme_stop_mdns_discovery", 00:19:06.343 "bdev_nvme_start_mdns_discovery", 00:19:06.343 "bdev_nvme_set_multipath_policy", 00:19:06.343 "bdev_nvme_set_preferred_path", 00:19:06.343 "bdev_nvme_get_io_paths", 00:19:06.343 "bdev_nvme_remove_error_injection", 00:19:06.343 "bdev_nvme_add_error_injection", 00:19:06.343 "bdev_nvme_get_discovery_info", 00:19:06.343 "bdev_nvme_stop_discovery", 00:19:06.343 "bdev_nvme_start_discovery", 00:19:06.343 "bdev_nvme_get_controller_health_info", 00:19:06.343 "bdev_nvme_disable_controller", 00:19:06.343 "bdev_nvme_enable_controller", 00:19:06.343 "bdev_nvme_reset_controller", 00:19:06.343 "bdev_nvme_get_transport_statistics", 00:19:06.343 "bdev_nvme_apply_firmware", 00:19:06.343 "bdev_nvme_detach_controller", 00:19:06.343 "bdev_nvme_get_controllers", 00:19:06.343 "bdev_nvme_attach_controller", 00:19:06.343 "bdev_nvme_set_hotplug", 00:19:06.343 "bdev_nvme_set_options", 00:19:06.343 "bdev_passthru_delete", 00:19:06.343 "bdev_passthru_create", 00:19:06.343 "bdev_lvol_set_parent_bdev", 00:19:06.343 "bdev_lvol_set_parent", 00:19:06.343 "bdev_lvol_check_shallow_copy", 00:19:06.343 "bdev_lvol_start_shallow_copy", 00:19:06.343 "bdev_lvol_grow_lvstore", 00:19:06.343 "bdev_lvol_get_lvols", 00:19:06.343 "bdev_lvol_get_lvstores", 00:19:06.343 "bdev_lvol_delete", 00:19:06.343 "bdev_lvol_set_read_only", 00:19:06.343 "bdev_lvol_resize", 00:19:06.343 "bdev_lvol_decouple_parent", 00:19:06.343 "bdev_lvol_inflate", 00:19:06.343 "bdev_lvol_rename", 00:19:06.343 "bdev_lvol_clone_bdev", 00:19:06.343 "bdev_lvol_clone", 00:19:06.343 "bdev_lvol_snapshot", 00:19:06.343 "bdev_lvol_create", 00:19:06.343 "bdev_lvol_delete_lvstore", 00:19:06.343 "bdev_lvol_rename_lvstore", 00:19:06.343 
"bdev_lvol_create_lvstore", 00:19:06.343 "bdev_raid_set_options", 00:19:06.343 "bdev_raid_remove_base_bdev", 00:19:06.343 "bdev_raid_add_base_bdev", 00:19:06.343 "bdev_raid_delete", 00:19:06.343 "bdev_raid_create", 00:19:06.343 "bdev_raid_get_bdevs", 00:19:06.343 "bdev_error_inject_error", 00:19:06.343 "bdev_error_delete", 00:19:06.343 "bdev_error_create", 00:19:06.343 "bdev_split_delete", 00:19:06.343 "bdev_split_create", 00:19:06.343 "bdev_delay_delete", 00:19:06.343 "bdev_delay_create", 00:19:06.343 "bdev_delay_update_latency", 00:19:06.343 "bdev_zone_block_delete", 00:19:06.343 "bdev_zone_block_create", 00:19:06.343 "blobfs_create", 00:19:06.343 "blobfs_detect", 00:19:06.343 "blobfs_set_cache_size", 00:19:06.343 "bdev_xnvme_delete", 00:19:06.343 "bdev_xnvme_create", 00:19:06.343 "bdev_aio_delete", 00:19:06.343 "bdev_aio_rescan", 00:19:06.343 "bdev_aio_create", 00:19:06.343 "bdev_ftl_set_property", 00:19:06.343 "bdev_ftl_get_properties", 00:19:06.343 "bdev_ftl_get_stats", 00:19:06.343 "bdev_ftl_unmap", 00:19:06.343 "bdev_ftl_unload", 00:19:06.343 "bdev_ftl_delete", 00:19:06.343 "bdev_ftl_load", 00:19:06.343 "bdev_ftl_create", 00:19:06.343 "bdev_virtio_attach_controller", 00:19:06.343 "bdev_virtio_scsi_get_devices", 00:19:06.343 "bdev_virtio_detach_controller", 00:19:06.343 "bdev_virtio_blk_set_hotplug", 00:19:06.343 "bdev_iscsi_delete", 00:19:06.343 "bdev_iscsi_create", 00:19:06.343 "bdev_iscsi_set_options", 00:19:06.343 "accel_error_inject_error", 00:19:06.343 "ioat_scan_accel_module", 00:19:06.343 "dsa_scan_accel_module", 00:19:06.343 "iaa_scan_accel_module", 00:19:06.343 "keyring_file_remove_key", 00:19:06.343 "keyring_file_add_key", 00:19:06.343 "keyring_linux_set_options", 00:19:06.343 "fsdev_aio_delete", 00:19:06.343 "fsdev_aio_create", 00:19:06.343 "iscsi_get_histogram", 00:19:06.343 "iscsi_enable_histogram", 00:19:06.343 "iscsi_set_options", 00:19:06.343 "iscsi_get_auth_groups", 00:19:06.343 "iscsi_auth_group_remove_secret", 00:19:06.343 "iscsi_auth_group_add_secret", 00:19:06.343 "iscsi_delete_auth_group", 00:19:06.343 "iscsi_create_auth_group", 00:19:06.343 "iscsi_set_discovery_auth", 00:19:06.343 "iscsi_get_options", 00:19:06.343 "iscsi_target_node_request_logout", 00:19:06.343 "iscsi_target_node_set_redirect", 00:19:06.343 "iscsi_target_node_set_auth", 00:19:06.343 "iscsi_target_node_add_lun", 00:19:06.343 "iscsi_get_stats", 00:19:06.343 "iscsi_get_connections", 00:19:06.344 "iscsi_portal_group_set_auth", 00:19:06.344 "iscsi_start_portal_group", 00:19:06.344 "iscsi_delete_portal_group", 00:19:06.344 "iscsi_create_portal_group", 00:19:06.344 "iscsi_get_portal_groups", 00:19:06.344 "iscsi_delete_target_node", 00:19:06.344 "iscsi_target_node_remove_pg_ig_maps", 00:19:06.344 "iscsi_target_node_add_pg_ig_maps", 00:19:06.344 "iscsi_create_target_node", 00:19:06.344 "iscsi_get_target_nodes", 00:19:06.344 "iscsi_delete_initiator_group", 00:19:06.344 "iscsi_initiator_group_remove_initiators", 00:19:06.344 "iscsi_initiator_group_add_initiators", 00:19:06.344 "iscsi_create_initiator_group", 00:19:06.344 "iscsi_get_initiator_groups", 00:19:06.344 "nvmf_set_crdt", 00:19:06.344 "nvmf_set_config", 00:19:06.344 "nvmf_set_max_subsystems", 00:19:06.344 "nvmf_stop_mdns_prr", 00:19:06.344 "nvmf_publish_mdns_prr", 00:19:06.344 "nvmf_subsystem_get_listeners", 00:19:06.344 "nvmf_subsystem_get_qpairs", 00:19:06.344 "nvmf_subsystem_get_controllers", 00:19:06.344 "nvmf_get_stats", 00:19:06.344 "nvmf_get_transports", 00:19:06.344 "nvmf_create_transport", 00:19:06.344 "nvmf_get_targets", 00:19:06.344 
"nvmf_delete_target", 00:19:06.344 "nvmf_create_target", 00:19:06.344 "nvmf_subsystem_allow_any_host", 00:19:06.344 "nvmf_subsystem_set_keys", 00:19:06.344 "nvmf_subsystem_remove_host", 00:19:06.344 "nvmf_subsystem_add_host", 00:19:06.344 "nvmf_ns_remove_host", 00:19:06.344 "nvmf_ns_add_host", 00:19:06.344 "nvmf_subsystem_remove_ns", 00:19:06.344 "nvmf_subsystem_set_ns_ana_group", 00:19:06.344 "nvmf_subsystem_add_ns", 00:19:06.344 "nvmf_subsystem_listener_set_ana_state", 00:19:06.344 "nvmf_discovery_get_referrals", 00:19:06.344 "nvmf_discovery_remove_referral", 00:19:06.344 "nvmf_discovery_add_referral", 00:19:06.344 "nvmf_subsystem_remove_listener", 00:19:06.344 "nvmf_subsystem_add_listener", 00:19:06.344 "nvmf_delete_subsystem", 00:19:06.344 "nvmf_create_subsystem", 00:19:06.344 "nvmf_get_subsystems", 00:19:06.344 "env_dpdk_get_mem_stats", 00:19:06.344 "nbd_get_disks", 00:19:06.344 "nbd_stop_disk", 00:19:06.344 "nbd_start_disk", 00:19:06.344 "ublk_recover_disk", 00:19:06.344 "ublk_get_disks", 00:19:06.344 "ublk_stop_disk", 00:19:06.344 "ublk_start_disk", 00:19:06.344 "ublk_destroy_target", 00:19:06.344 "ublk_create_target", 00:19:06.344 "virtio_blk_create_transport", 00:19:06.344 "virtio_blk_get_transports", 00:19:06.344 "vhost_controller_set_coalescing", 00:19:06.344 "vhost_get_controllers", 00:19:06.344 "vhost_delete_controller", 00:19:06.344 "vhost_create_blk_controller", 00:19:06.344 "vhost_scsi_controller_remove_target", 00:19:06.344 "vhost_scsi_controller_add_target", 00:19:06.344 "vhost_start_scsi_controller", 00:19:06.344 "vhost_create_scsi_controller", 00:19:06.344 "thread_set_cpumask", 00:19:06.344 "scheduler_set_options", 00:19:06.344 "framework_get_governor", 00:19:06.344 "framework_get_scheduler", 00:19:06.344 "framework_set_scheduler", 00:19:06.344 "framework_get_reactors", 00:19:06.344 "thread_get_io_channels", 00:19:06.344 "thread_get_pollers", 00:19:06.344 "thread_get_stats", 00:19:06.344 "framework_monitor_context_switch", 00:19:06.344 "spdk_kill_instance", 00:19:06.344 "log_enable_timestamps", 00:19:06.344 "log_get_flags", 00:19:06.344 "log_clear_flag", 00:19:06.344 "log_set_flag", 00:19:06.344 "log_get_level", 00:19:06.344 "log_set_level", 00:19:06.344 "log_get_print_level", 00:19:06.344 "log_set_print_level", 00:19:06.344 "framework_enable_cpumask_locks", 00:19:06.344 "framework_disable_cpumask_locks", 00:19:06.344 "framework_wait_init", 00:19:06.344 "framework_start_init", 00:19:06.344 "scsi_get_devices", 00:19:06.344 "bdev_get_histogram", 00:19:06.344 "bdev_enable_histogram", 00:19:06.344 "bdev_set_qos_limit", 00:19:06.344 "bdev_set_qd_sampling_period", 00:19:06.344 "bdev_get_bdevs", 00:19:06.344 "bdev_reset_iostat", 00:19:06.344 "bdev_get_iostat", 00:19:06.344 "bdev_examine", 00:19:06.344 "bdev_wait_for_examine", 00:19:06.344 "bdev_set_options", 00:19:06.344 "accel_get_stats", 00:19:06.344 "accel_set_options", 00:19:06.344 "accel_set_driver", 00:19:06.344 "accel_crypto_key_destroy", 00:19:06.344 "accel_crypto_keys_get", 00:19:06.344 "accel_crypto_key_create", 00:19:06.344 "accel_assign_opc", 00:19:06.344 "accel_get_module_info", 00:19:06.344 "accel_get_opc_assignments", 00:19:06.344 "vmd_rescan", 00:19:06.344 "vmd_remove_device", 00:19:06.344 "vmd_enable", 00:19:06.344 "sock_get_default_impl", 00:19:06.344 "sock_set_default_impl", 00:19:06.344 "sock_impl_set_options", 00:19:06.344 "sock_impl_get_options", 00:19:06.344 "iobuf_get_stats", 00:19:06.344 "iobuf_set_options", 00:19:06.344 "keyring_get_keys", 00:19:06.344 "framework_get_pci_devices", 00:19:06.344 
"framework_get_config", 00:19:06.344 "framework_get_subsystems", 00:19:06.344 "fsdev_set_opts", 00:19:06.344 "fsdev_get_opts", 00:19:06.344 "trace_get_info", 00:19:06.344 "trace_get_tpoint_group_mask", 00:19:06.344 "trace_disable_tpoint_group", 00:19:06.344 "trace_enable_tpoint_group", 00:19:06.344 "trace_clear_tpoint_mask", 00:19:06.344 "trace_set_tpoint_mask", 00:19:06.344 "notify_get_notifications", 00:19:06.344 "notify_get_types", 00:19:06.344 "spdk_get_version", 00:19:06.344 "rpc_get_methods" 00:19:06.344 ] 00:19:06.344 23:00:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.344 23:00:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:06.344 23:00:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57997 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57997 ']' 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57997 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57997 00:19:06.344 killing process with pid 57997 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57997' 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57997 00:19:06.344 23:00:44 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57997 00:19:08.267 ************************************ 00:19:08.267 END TEST spdkcli_tcp 00:19:08.267 ************************************ 00:19:08.267 00:19:08.267 real 0m2.886s 00:19:08.267 user 0m5.173s 00:19:08.267 sys 0m0.457s 00:19:08.267 23:00:46 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.267 23:00:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:08.267 23:00:46 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:19:08.267 23:00:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:08.267 23:00:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.267 23:00:46 -- common/autotest_common.sh@10 -- # set +x 00:19:08.267 ************************************ 00:19:08.267 START TEST dpdk_mem_utility 00:19:08.267 ************************************ 00:19:08.267 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:19:08.267 * Looking for test storage... 
00:19:08.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:19:08.267 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:08.267 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:08.267 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:19:08.267 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.267 23:00:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:19:08.267 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.268 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:08.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.268 --rc genhtml_branch_coverage=1 00:19:08.268 --rc genhtml_function_coverage=1 00:19:08.268 --rc genhtml_legend=1 00:19:08.268 --rc geninfo_all_blocks=1 00:19:08.268 --rc geninfo_unexecuted_blocks=1 00:19:08.268 00:19:08.268 ' 00:19:08.268 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:08.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.268 --rc 
genhtml_branch_coverage=1 00:19:08.268 --rc genhtml_function_coverage=1 00:19:08.268 --rc genhtml_legend=1 00:19:08.268 --rc geninfo_all_blocks=1 00:19:08.268 --rc geninfo_unexecuted_blocks=1 00:19:08.268 00:19:08.268 ' 00:19:08.268 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:08.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.268 --rc genhtml_branch_coverage=1 00:19:08.268 --rc genhtml_function_coverage=1 00:19:08.268 --rc genhtml_legend=1 00:19:08.268 --rc geninfo_all_blocks=1 00:19:08.268 --rc geninfo_unexecuted_blocks=1 00:19:08.268 00:19:08.268 ' 00:19:08.268 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:08.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.268 --rc genhtml_branch_coverage=1 00:19:08.268 --rc genhtml_function_coverage=1 00:19:08.268 --rc genhtml_legend=1 00:19:08.268 --rc geninfo_all_blocks=1 00:19:08.268 --rc geninfo_unexecuted_blocks=1 00:19:08.268 00:19:08.268 ' 00:19:08.268 23:00:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:19:08.268 23:00:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58108 00:19:08.268 23:00:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58108 00:19:08.268 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58108 ']' 00:19:08.268 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.268 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.268 23:00:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:08.268 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.268 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.268 23:00:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:19:08.268 [2024-12-09 23:00:46.552270] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
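
The dpdk_mem_utility run traced below is a two-stage flow: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py post-processes that file, first as a summary (heaps, mempools, memzones) and then, with `-m 0`, as the per-element layout of heap 0. A sketch of the same two stages; the RPC and both script invocations appear in the trace (rpc_cmd there is the harness helper, and the plain rpc.py call shown here is assumed equivalent for a default-socket target):

    # Ask the running spdk_tgt to dump DPDK memory state, then summarize it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heap/mempool/memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # element-level view of heap 0
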
00:19:08.268 [2024-12-09 23:00:46.552934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58108 ] 00:19:08.268 [2024-12-09 23:00:46.712859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.529 [2024-12-09 23:00:46.845790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.102 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.102 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:19:09.102 23:00:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:19:09.102 23:00:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:19:09.102 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.102 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:19:09.362 { 00:19:09.362 "filename": "/tmp/spdk_mem_dump.txt" 00:19:09.362 } 00:19:09.362 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.362 23:00:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:19:09.362 DPDK memory size 824.000000 MiB in 1 heap(s) 00:19:09.362 1 heaps totaling size 824.000000 MiB 00:19:09.362 size: 824.000000 MiB heap id: 0 00:19:09.362 end heaps---------- 00:19:09.362 9 mempools totaling size 603.782043 MiB 00:19:09.362 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:19:09.362 size: 158.602051 MiB name: PDU_data_out_Pool 00:19:09.362 size: 100.555481 MiB name: bdev_io_58108 00:19:09.362 size: 50.003479 MiB name: msgpool_58108 00:19:09.362 size: 36.509338 MiB name: fsdev_io_58108 00:19:09.362 size: 21.763794 MiB name: PDU_Pool 00:19:09.362 size: 19.513306 MiB name: SCSI_TASK_Pool 00:19:09.362 size: 4.133484 MiB name: evtpool_58108 00:19:09.362 size: 0.026123 MiB name: Session_Pool 00:19:09.362 end mempools------- 00:19:09.362 6 memzones totaling size 4.142822 MiB 00:19:09.362 size: 1.000366 MiB name: RG_ring_0_58108 00:19:09.362 size: 1.000366 MiB name: RG_ring_1_58108 00:19:09.362 size: 1.000366 MiB name: RG_ring_4_58108 00:19:09.362 size: 1.000366 MiB name: RG_ring_5_58108 00:19:09.362 size: 0.125366 MiB name: RG_ring_2_58108 00:19:09.362 size: 0.015991 MiB name: RG_ring_3_58108 00:19:09.362 end memzones------- 00:19:09.362 23:00:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:19:09.362 heap id: 0 total size: 824.000000 MiB number of busy elements: 329 number of free elements: 18 00:19:09.362 list of free elements. 
size: 16.777954 MiB 00:19:09.362 element at address: 0x200006400000 with size: 1.995972 MiB 00:19:09.362 element at address: 0x20000a600000 with size: 1.995972 MiB 00:19:09.362 element at address: 0x200003e00000 with size: 1.991028 MiB 00:19:09.362 element at address: 0x200019500040 with size: 0.999939 MiB 00:19:09.362 element at address: 0x200019900040 with size: 0.999939 MiB 00:19:09.362 element at address: 0x200019a00000 with size: 0.999084 MiB 00:19:09.362 element at address: 0x200032600000 with size: 0.994324 MiB 00:19:09.362 element at address: 0x200000400000 with size: 0.992004 MiB 00:19:09.362 element at address: 0x200019200000 with size: 0.959656 MiB 00:19:09.362 element at address: 0x200019d00040 with size: 0.936401 MiB 00:19:09.362 element at address: 0x200000200000 with size: 0.716980 MiB 00:19:09.362 element at address: 0x20001b400000 with size: 0.559021 MiB 00:19:09.362 element at address: 0x200000c00000 with size: 0.489197 MiB 00:19:09.362 element at address: 0x200019600000 with size: 0.487976 MiB 00:19:09.362 element at address: 0x200019e00000 with size: 0.485413 MiB 00:19:09.362 element at address: 0x200012c00000 with size: 0.433716 MiB 00:19:09.362 element at address: 0x200028800000 with size: 0.390442 MiB 00:19:09.362 element at address: 0x200000800000 with size: 0.350891 MiB 00:19:09.362 list of standard malloc elements. size: 199.291138 MiB 00:19:09.362 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:19:09.362 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:19:09.362 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:19:09.362 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:19:09.362 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:19:09.362 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:19:09.362 element at address: 0x200019deff40 with size: 0.062683 MiB 00:19:09.362 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:19:09.362 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:19:09.362 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:19:09.362 element at address: 0x200012bff040 with size: 0.000305 MiB 00:19:09.362 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:19:09.362 element at address: 0x2000004fef40 with size: 0.000244 MiB 
[editor's note: a few hundred further per-element lines, each of the form "element at address: 0x… with size: 0.000244 MiB", are elided here for readability; the complete dump was written to /tmp/spdk_mem_dump.txt by the env_dpdk_get_mem_stats RPC above]
00:19:09.363 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:19:09.363 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:19:09.363 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:19:09.363 list of memzone associated elements. size: 607.930908 MiB 00:19:09.363 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:19:09.363 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:19:09.363 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:19:09.363 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:19:09.363 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:19:09.363 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58108_0 00:19:09.363 element at address: 0x200000dff340 with size: 48.003113 MiB 00:19:09.363 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58108_0 00:19:09.363 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:19:09.363 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58108_0 00:19:09.363 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:19:09.363 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:19:09.363 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:19:09.363 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:19:09.363 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:19:09.363 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58108_0 00:19:09.363 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:19:09.363 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58108 00:19:09.363 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:19:09.363 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58108 00:19:09.363 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:19:09.363 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:19:09.363 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:19:09.363 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:19:09.363 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:19:09.363 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:19:09.363 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:19:09.363 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:19:09.363 element at address: 0x200000cff100 with size: 1.000549 MiB 00:19:09.363 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58108 00:19:09.363 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:19:09.363 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58108 00:19:09.363 element at address: 0x200019affd40 with size: 1.000549 MiB 00:19:09.363 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58108 00:19:09.363 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:19:09.363 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58108 00:19:09.363 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:19:09.363 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58108 00:19:09.363 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:19:09.363 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58108 00:19:09.363 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:19:09.363 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:19:09.363 
element at address: 0x200012c6f980 with size: 0.500549 MiB 00:19:09.363 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:19:09.363 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:19:09.363 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:19:09.363 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:19:09.363 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58108 00:19:09.363 element at address: 0x20000085df80 with size: 0.125549 MiB 00:19:09.363 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58108 00:19:09.363 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:19:09.363 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:19:09.363 element at address: 0x200028864140 with size: 0.023804 MiB 00:19:09.363 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:19:09.363 element at address: 0x200000859d40 with size: 0.016174 MiB 00:19:09.363 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58108 00:19:09.363 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:19:09.363 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:19:09.363 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:19:09.363 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58108 00:19:09.363 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:19:09.363 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58108 00:19:09.363 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:19:09.363 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58108 00:19:09.363 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:19:09.363 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:19:09.363 23:00:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:19:09.363 23:00:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58108 00:19:09.363 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58108 ']' 00:19:09.363 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58108 00:19:09.363 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:19:09.363 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.363 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58108 00:19:09.363 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.363 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.363 killing process with pid 58108 00:19:09.363 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58108' 00:19:09.363 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58108 00:19:09.363 23:00:47 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58108 00:19:11.280 ************************************ 00:19:11.280 END TEST dpdk_mem_utility 00:19:11.280 ************************************ 00:19:11.280 00:19:11.280 real 0m3.065s 00:19:11.280 user 0m3.069s 00:19:11.280 sys 0m0.404s 00:19:11.280 23:00:49 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.280 23:00:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:19:11.280 23:00:49 -- spdk/autotest.sh@168 -- # run_test event 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:19:11.280 23:00:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:11.280 23:00:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.280 23:00:49 -- common/autotest_common.sh@10 -- # set +x 00:19:11.280 ************************************ 00:19:11.280 START TEST event 00:19:11.280 ************************************ 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:19:11.280 * Looking for test storage... 00:19:11.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1711 -- # lcov --version 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:11.280 23:00:49 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.280 23:00:49 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.280 23:00:49 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.280 23:00:49 event -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.280 23:00:49 event -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.280 23:00:49 event -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.280 23:00:49 event -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.280 23:00:49 event -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.280 23:00:49 event -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.280 23:00:49 event -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.280 23:00:49 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.280 23:00:49 event -- scripts/common.sh@344 -- # case "$op" in 00:19:11.280 23:00:49 event -- scripts/common.sh@345 -- # : 1 00:19:11.280 23:00:49 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.280 23:00:49 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:11.280 23:00:49 event -- scripts/common.sh@365 -- # decimal 1 00:19:11.280 23:00:49 event -- scripts/common.sh@353 -- # local d=1 00:19:11.280 23:00:49 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.280 23:00:49 event -- scripts/common.sh@355 -- # echo 1 00:19:11.280 23:00:49 event -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.280 23:00:49 event -- scripts/common.sh@366 -- # decimal 2 00:19:11.280 23:00:49 event -- scripts/common.sh@353 -- # local d=2 00:19:11.280 23:00:49 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.280 23:00:49 event -- scripts/common.sh@355 -- # echo 2 00:19:11.280 23:00:49 event -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.280 23:00:49 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.280 23:00:49 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.280 23:00:49 event -- scripts/common.sh@368 -- # return 0 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:11.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.280 --rc genhtml_branch_coverage=1 00:19:11.280 --rc genhtml_function_coverage=1 00:19:11.280 --rc genhtml_legend=1 00:19:11.280 --rc geninfo_all_blocks=1 00:19:11.280 --rc geninfo_unexecuted_blocks=1 00:19:11.280 00:19:11.280 ' 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:11.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.280 --rc genhtml_branch_coverage=1 00:19:11.280 --rc genhtml_function_coverage=1 00:19:11.280 --rc genhtml_legend=1 00:19:11.280 --rc geninfo_all_blocks=1 00:19:11.280 --rc geninfo_unexecuted_blocks=1 00:19:11.280 00:19:11.280 ' 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:11.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.280 --rc genhtml_branch_coverage=1 00:19:11.280 --rc genhtml_function_coverage=1 00:19:11.280 --rc genhtml_legend=1 00:19:11.280 --rc geninfo_all_blocks=1 00:19:11.280 --rc geninfo_unexecuted_blocks=1 00:19:11.280 00:19:11.280 ' 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:11.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.280 --rc genhtml_branch_coverage=1 00:19:11.280 --rc genhtml_function_coverage=1 00:19:11.280 --rc genhtml_legend=1 00:19:11.280 --rc geninfo_all_blocks=1 00:19:11.280 --rc geninfo_unexecuted_blocks=1 00:19:11.280 00:19:11.280 ' 00:19:11.280 23:00:49 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:11.280 23:00:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:19:11.280 23:00:49 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:19:11.280 23:00:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.280 23:00:49 event -- common/autotest_common.sh@10 -- # set +x 00:19:11.280 ************************************ 00:19:11.280 START TEST event_perf 00:19:11.280 ************************************ 00:19:11.280 23:00:49 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:19:11.280 Running I/O for 1 seconds...[2024-12-09 
23:00:49.639595] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:11.280 [2024-12-09 23:00:49.639711] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58205 ] 00:19:11.560 [2024-12-09 23:00:49.801627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.560 [2024-12-09 23:00:49.917581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.560 Running I/O for 1 seconds...[2024-12-09 23:00:49.917970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.560 [2024-12-09 23:00:49.918249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.560 [2024-12-09 23:00:49.918259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.947 00:19:12.947 lcore 0: 155147 00:19:12.947 lcore 1: 155148 00:19:12.947 lcore 2: 155150 00:19:12.947 lcore 3: 155149 00:19:12.947 done. 00:19:12.947 00:19:12.947 real 0m1.480s 00:19:12.947 user 0m4.254s 00:19:12.947 sys 0m0.091s 00:19:12.947 23:00:51 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.947 23:00:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:19:12.947 ************************************ 00:19:12.947 END TEST event_perf 00:19:12.947 ************************************ 00:19:12.947 23:00:51 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:19:12.947 23:00:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:12.947 23:00:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.947 23:00:51 event -- common/autotest_common.sh@10 -- # set +x 00:19:12.947 ************************************ 00:19:12.947 START TEST event_reactor 00:19:12.947 ************************************ 00:19:12.947 23:00:51 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:19:12.947 [2024-12-09 23:00:51.179883] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
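[editor's note: the event_perf round above is reproducible outside the harness; a minimal sketch, with the binary path, core mask and duration copied verbatim from the run_test invocation (root privileges and hugepage setup are assumed, as on this CI VM):

  # -m 0xF: start reactors on lcores 0-3; -t 1: run the benchmark for one second
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
  # expected output: one "lcore N: <event count>" line per core, then "done."
]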
00:19:12.947 [2024-12-09 23:00:51.179993] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58244 ] 00:19:12.947 [2024-12-09 23:00:51.340258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.210 [2024-12-09 23:00:51.446029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.202 test_start 00:19:14.202 oneshot 00:19:14.202 tick 100 00:19:14.202 tick 100 00:19:14.202 tick 250 00:19:14.202 tick 100 00:19:14.202 tick 100 00:19:14.202 tick 250 00:19:14.202 tick 100 00:19:14.202 tick 500 00:19:14.202 tick 100 00:19:14.202 tick 100 00:19:14.202 tick 250 00:19:14.202 tick 100 00:19:14.202 tick 100 00:19:14.202 test_end 00:19:14.202 00:19:14.202 real 0m1.457s 00:19:14.202 user 0m1.277s 00:19:14.202 sys 0m0.071s 00:19:14.202 23:00:52 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.202 23:00:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:19:14.202 ************************************ 00:19:14.202 END TEST event_reactor 00:19:14.202 ************************************ 00:19:14.465 23:00:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:19:14.465 23:00:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:14.465 23:00:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.465 23:00:52 event -- common/autotest_common.sh@10 -- # set +x 00:19:14.465 ************************************ 00:19:14.465 START TEST event_reactor_perf 00:19:14.465 ************************************ 00:19:14.465 23:00:52 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:19:14.465 [2024-12-09 23:00:52.712011] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
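[editor's note: the event_reactor round just above exercises a single reactor with one oneshot event plus periodic pollers; the tick values printed appear to be the registered poller periods, which would explain why "tick 100" fires more often than "tick 250" or "tick 500". A sketch of rerunning it, with the flag copied from the trace (-t is presumably the run time in seconds, matching the ~1.4 s wall time reported):

  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
]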
00:19:14.465 [2024-12-09 23:00:52.712136] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58281 ] 00:19:14.465 [2024-12-09 23:00:52.872270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.726 [2024-12-09 23:00:52.985613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.120 test_start 00:19:16.120 test_end 00:19:16.120 Performance: 311914 events per second 00:19:16.120 00:19:16.120 real 0m1.469s 00:19:16.120 user 0m1.292s 00:19:16.120 sys 0m0.065s 00:19:16.120 ************************************ 00:19:16.120 END TEST event_reactor_perf 00:19:16.121 ************************************ 00:19:16.121 23:00:54 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.121 23:00:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:19:16.121 23:00:54 event -- event/event.sh@49 -- # uname -s 00:19:16.121 23:00:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:19:16.121 23:00:54 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:19:16.121 23:00:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:16.121 23:00:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.121 23:00:54 event -- common/autotest_common.sh@10 -- # set +x 00:19:16.121 ************************************ 00:19:16.121 START TEST event_scheduler 00:19:16.121 ************************************ 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:19:16.121 * Looking for test storage... 
00:19:16.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.121 23:00:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:16.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.121 --rc genhtml_branch_coverage=1 00:19:16.121 --rc genhtml_function_coverage=1 00:19:16.121 --rc genhtml_legend=1 00:19:16.121 --rc geninfo_all_blocks=1 00:19:16.121 --rc geninfo_unexecuted_blocks=1 00:19:16.121 00:19:16.121 ' 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:16.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.121 --rc genhtml_branch_coverage=1 00:19:16.121 --rc genhtml_function_coverage=1 00:19:16.121 --rc genhtml_legend=1 00:19:16.121 --rc geninfo_all_blocks=1 00:19:16.121 --rc geninfo_unexecuted_blocks=1 00:19:16.121 00:19:16.121 ' 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:16.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.121 --rc genhtml_branch_coverage=1 00:19:16.121 --rc genhtml_function_coverage=1 00:19:16.121 --rc genhtml_legend=1 00:19:16.121 --rc geninfo_all_blocks=1 00:19:16.121 --rc geninfo_unexecuted_blocks=1 00:19:16.121 00:19:16.121 ' 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:16.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.121 --rc genhtml_branch_coverage=1 00:19:16.121 --rc genhtml_function_coverage=1 00:19:16.121 --rc genhtml_legend=1 00:19:16.121 --rc geninfo_all_blocks=1 00:19:16.121 --rc geninfo_unexecuted_blocks=1 00:19:16.121 00:19:16.121 ' 00:19:16.121 23:00:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:19:16.121 23:00:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58351 00:19:16.121 23:00:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:19:16.121 23:00:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58351 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58351 ']' 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:19:16.121 23:00:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.121 23:00:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:19:16.121 [2024-12-09 23:00:54.439954] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:16.121 [2024-12-09 23:00:54.440284] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58351 ] 00:19:16.383 [2024-12-09 23:00:54.604721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.383 [2024-12-09 23:00:54.751161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.383 [2024-12-09 23:00:54.751585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.383 [2024-12-09 23:00:54.751917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.383 [2024-12-09 23:00:54.751924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.990 23:00:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.990 23:00:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:19:16.990 23:00:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:19:16.991 23:00:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.991 23:00:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:19:16.991 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:19:16.991 POWER: Cannot set governor of lcore 0 to userspace 00:19:16.991 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:19:16.991 POWER: Cannot set governor of lcore 0 to performance 00:19:16.991 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:19:16.991 POWER: Cannot set governor of lcore 0 to userspace 00:19:16.991 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:19:16.991 POWER: Cannot set governor of lcore 0 to userspace 00:19:16.991 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:19:16.991 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:19:16.991 POWER: Unable to set Power Management Environment for lcore 0 00:19:16.991 [2024-12-09 23:00:55.298114] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:19:16.991 [2024-12-09 23:00:55.298137] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:19:16.991 [2024-12-09 23:00:55.298147] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:19:16.991 [2024-12-09 23:00:55.298163] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:19:16.991 [2024-12-09 23:00:55.298171] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:19:16.991 [2024-12-09 23:00:55.298179] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:19:16.991 23:00:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.991 23:00:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:19:16.991 23:00:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.991 23:00:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 [2024-12-09 23:00:55.538940] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:19:17.252 23:00:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.252 23:00:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:19:17.252 23:00:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:17.252 23:00:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.252 23:00:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 ************************************ 00:19:17.252 START TEST scheduler_create_thread 00:19:17.252 ************************************ 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 2 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 3 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 4 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 5 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 6 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 7 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.252 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 8 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.253 9 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.253 10 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.253 23:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:19.165 23:00:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.165 23:00:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:19:19.165 23:00:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:19:19.165 23:00:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.165 23:00:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:19.733 23:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.733 00:19:19.733 real 0m2.617s 00:19:19.733 user 0m0.013s 00:19:19.733 sys 0m0.007s 00:19:19.733 23:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.733 ************************************ 00:19:19.733 END TEST scheduler_create_thread 00:19:19.733 ************************************ 00:19:19.733 23:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:19:19.994 23:00:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:19.994 23:00:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58351 00:19:19.994 23:00:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58351 ']' 00:19:19.994 23:00:58 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58351 00:19:19.994 23:00:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:19:19.994 23:00:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.994 23:00:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58351 00:19:19.994 23:00:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:19.994 23:00:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:19.994 killing process with pid 58351 00:19:19.994 23:00:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58351' 00:19:19.994 23:00:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58351 00:19:19.994 23:00:58 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58351 00:19:20.255 [2024-12-09 23:00:58.646819] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:19:21.196 00:19:21.196 real 0m5.310s 00:19:21.196 user 0m9.145s 00:19:21.196 sys 0m0.398s 00:19:21.196 ************************************ 00:19:21.196 END TEST event_scheduler 00:19:21.196 ************************************ 00:19:21.196 23:00:59 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.196 23:00:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:19:21.196 23:00:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:19:21.196 23:00:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:19:21.196 23:00:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:21.196 23:00:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.196 23:00:59 event -- common/autotest_common.sh@10 -- # set +x 00:19:21.196 ************************************ 00:19:21.196 START TEST app_repeat 00:19:21.196 ************************************ 00:19:21.196 23:00:59 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58463 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:19:21.196 Process app_repeat pid: 58463 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58463' 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:19:21.196 spdk_app_start Round 0 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:19:21.196 23:00:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58463 /var/tmp/spdk-nbd.sock 00:19:21.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:21.196 23:00:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58463 ']' 00:19:21.196 23:00:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:21.196 23:00:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.197 23:00:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:21.197 23:00:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.197 23:00:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:21.490 [2024-12-09 23:00:59.659956] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
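Before the log moves on to app_repeat: the scheduler_create_thread test traced above boils down to a short RPC sequence. A minimal sketch, assuming scripts/rpc.py is invoked from the SPDK repo root against the test app's default RPC socket; the trace's rpc_cmd wrapper (retries, xtrace toggling) is elided:

```bash
# Sketch of the scheduler_create_thread flow traced above; plain rpc.py calls
# stand in for the rpc_cmd wrapper used by the test.
rpc="scripts/rpc.py"

# Threads 5-8 in the trace: one idle thread pinned per core of the 0xF mask.
for mask in 0x1 0x2 0x4 0x8; do
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# Threads 9-10: unpinned threads at different active percentages, then bump
# the half_active thread (id 11 in the trace) from 0% to 50% busy.
$rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50

# Thread 12: created fully active, then deleted again (the delete lands ~2s
# later in the trace, once the scheduler has rebalanced).
thread_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_delete "$thread_id"
```

Each scheduler_thread_create echoes the new thread id, which is why the bare numbers 5 through 10 appear interleaved in the trace output.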
00:19:21.490 [2024-12-09 23:00:59.660148] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58463 ] 00:19:21.490 [2024-12-09 23:00:59.830198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:21.752 [2024-12-09 23:00:59.982954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.752 [2024-12-09 23:00:59.983097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.323 23:01:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.323 23:01:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:22.323 23:01:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:22.584 Malloc0 00:19:22.584 23:01:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:22.845 Malloc1 00:19:22.845 23:01:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:22.845 23:01:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:23.106 /dev/nbd0 00:19:23.106 23:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:23.106 23:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:23.106 23:01:01 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:23.106 1+0 records in 00:19:23.106 1+0 records out 00:19:23.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457514 s, 9.0 MB/s 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.106 23:01:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:23.106 23:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:23.106 23:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:23.106 23:01:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:23.367 /dev/nbd1 00:19:23.367 23:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:23.367 23:01:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:23.367 1+0 records in 00:19:23.367 1+0 records out 00:19:23.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530763 s, 7.7 MB/s 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.367 23:01:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:23.367 23:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:23.367 23:01:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:23.367 23:01:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:23.367 23:01:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:23.367 
23:01:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:23.628 { 00:19:23.628 "nbd_device": "/dev/nbd0", 00:19:23.628 "bdev_name": "Malloc0" 00:19:23.628 }, 00:19:23.628 { 00:19:23.628 "nbd_device": "/dev/nbd1", 00:19:23.628 "bdev_name": "Malloc1" 00:19:23.628 } 00:19:23.628 ]' 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:23.628 { 00:19:23.628 "nbd_device": "/dev/nbd0", 00:19:23.628 "bdev_name": "Malloc0" 00:19:23.628 }, 00:19:23.628 { 00:19:23.628 "nbd_device": "/dev/nbd1", 00:19:23.628 "bdev_name": "Malloc1" 00:19:23.628 } 00:19:23.628 ]' 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:23.628 /dev/nbd1' 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:23.628 /dev/nbd1' 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:23.628 256+0 records in 00:19:23.628 256+0 records out 00:19:23.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00705382 s, 149 MB/s 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:23.628 256+0 records in 00:19:23.628 256+0 records out 00:19:23.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210184 s, 49.9 MB/s 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:23.628 23:01:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:23.628 256+0 records in 00:19:23.628 256+0 records out 00:19:23.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241175 s, 43.5 MB/s 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:23.628 23:01:02 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:23.628 23:01:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:23.897 23:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:23.897 23:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:23.897 23:01:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:23.897 23:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:23.897 23:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:23.897 23:01:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:23.897 23:01:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:23.897 23:01:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:23.897 23:01:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:23.897 23:01:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:24.159 23:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:24.159 23:01:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:24.159 23:01:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:24.159 23:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:24.159 23:01:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:24.159 23:01:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:24.159 23:01:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:24.159 23:01:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:24.159 23:01:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:24.159 23:01:02 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:24.159 23:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:24.419 23:01:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:19:24.419 23:01:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:24.990 23:01:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:19:25.561 [2024-12-09 23:01:04.007054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:25.821 [2024-12-09 23:01:04.142810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.821 [2024-12-09 23:01:04.142941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.082 [2024-12-09 23:01:04.287684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:26.082 [2024-12-09 23:01:04.287783] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:27.996 spdk_app_start Round 1 00:19:27.996 23:01:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:19:27.996 23:01:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:19:27.996 23:01:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58463 /var/tmp/spdk-nbd.sock 00:19:27.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:27.996 23:01:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58463 ']' 00:19:27.996 23:01:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:27.996 23:01:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.996 23:01:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
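Round 0 above exercises the full malloc-over-nbd data path. One verification round, condensed from the trace into plain commands; this is a sketch that assumes the app_repeat instance is already listening on the socket and /dev/nbd0-1 are free (temp-file paths shortened):

```bash
# One app_repeat verification round, condensed from the Round 0 trace above.
sock=/var/tmp/spdk-nbd.sock
rpc="scripts/rpc.py -s $sock"

$rpc bdev_malloc_create 64 4096              # creates Malloc0
$rpc bdev_malloc_create 64 4096              # creates Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

# Write identical random data (256 x 4 KiB) to both devices, then verify
# byte-for-byte against the source file.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$dev"
done
rm /tmp/nbdrandtest

# Tear down and confirm no nbd devices remain before stopping the app.
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
[[ $count -eq 0 ]]
$rpc spdk_kill_instance SIGTERM
```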
00:19:27.996 23:01:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.996 23:01:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:27.996 23:01:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.996 23:01:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:27.996 23:01:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:28.257 Malloc0 00:19:28.257 23:01:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:28.518 Malloc1 00:19:28.791 23:01:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:28.791 23:01:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:28.791 /dev/nbd0 00:19:28.791 23:01:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:28.791 23:01:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:28.791 23:01:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:28.791 23:01:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:28.791 23:01:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.791 23:01:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.791 23:01:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:28.791 23:01:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:28.791 23:01:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.791 23:01:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.791 23:01:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:29.054 1+0 records in 00:19:29.054 1+0 records out 
00:19:29.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503546 s, 8.1 MB/s 00:19:29.054 23:01:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:29.054 23:01:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:29.054 23:01:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:29.054 23:01:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:29.054 23:01:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:29.054 23:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:29.054 23:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:29.054 23:01:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:29.054 /dev/nbd1 00:19:29.054 23:01:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:29.316 23:01:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:29.316 1+0 records in 00:19:29.316 1+0 records out 00:19:29.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354141 s, 11.6 MB/s 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:29.316 23:01:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:29.316 23:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:29.316 23:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:29.316 23:01:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:29.316 23:01:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.316 23:01:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:29.316 23:01:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:29.316 { 00:19:29.316 "nbd_device": "/dev/nbd0", 00:19:29.316 "bdev_name": "Malloc0" 00:19:29.316 }, 00:19:29.316 { 00:19:29.316 "nbd_device": "/dev/nbd1", 00:19:29.316 "bdev_name": "Malloc1" 00:19:29.316 } 
00:19:29.316 ]' 00:19:29.316 23:01:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:29.316 23:01:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:29.316 { 00:19:29.316 "nbd_device": "/dev/nbd0", 00:19:29.316 "bdev_name": "Malloc0" 00:19:29.316 }, 00:19:29.316 { 00:19:29.316 "nbd_device": "/dev/nbd1", 00:19:29.316 "bdev_name": "Malloc1" 00:19:29.316 } 00:19:29.316 ]' 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:29.576 /dev/nbd1' 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:29.576 /dev/nbd1' 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:29.576 256+0 records in 00:19:29.576 256+0 records out 00:19:29.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00777967 s, 135 MB/s 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:29.576 256+0 records in 00:19:29.576 256+0 records out 00:19:29.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0474287 s, 22.1 MB/s 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:29.576 256+0 records in 00:19:29.576 256+0 records out 00:19:29.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312822 s, 33.5 MB/s 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:29.576 23:01:07 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.576 23:01:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:29.837 23:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:29.837 23:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.837 23:01:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.837 23:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.837 23:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.837 23:01:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.837 23:01:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:29.837 23:01:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.837 23:01:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.837 23:01:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.098 23:01:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:30.359 23:01:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:19:30.359 23:01:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:30.930 23:01:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:19:31.872 [2024-12-09 23:01:10.023842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:31.872 [2024-12-09 23:01:10.178215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.872 [2024-12-09 23:01:10.178442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.872 [2024-12-09 23:01:10.327956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:31.872 [2024-12-09 23:01:10.328059] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:33.793 spdk_app_start Round 2 00:19:33.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:33.794 23:01:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:19:33.794 23:01:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:19:33.794 23:01:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58463 /var/tmp/spdk-nbd.sock 00:19:33.794 23:01:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58463 ']' 00:19:33.794 23:01:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:33.794 23:01:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.794 23:01:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
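Every device in these rounds is gated by the waitfornbd helper (common/autotest_common.sh@872-893 in the trace) before any data is written. Reconstructed here as a sketch; the retry sleep interval is an assumption, since the trace only ever shows first-attempt success:

```bash
# waitfornbd, as traced above: poll /proc/partitions until the kernel
# exposes the device, then prove it is readable with one direct 4 KiB read.
waitfornbd() {
    local nbd_name=$1 i size

    # Wait for the kernel to publish the device node.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # interval assumed; not visible in the trace
    done

    # Probe with a direct read and check that data actually arrived
    # (the trace shows stat, then rm -f, then the size comparison).
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]] && return 0
        sleep 0.1
    done
    return 1
}
```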
00:19:33.794 23:01:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.794 23:01:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:34.058 23:01:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.058 23:01:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:34.058 23:01:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:34.327 Malloc0 00:19:34.327 23:01:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:34.589 Malloc1 00:19:34.589 23:01:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:34.589 23:01:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:34.851 /dev/nbd0 00:19:34.851 23:01:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:34.851 23:01:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:34.851 1+0 records in 00:19:34.851 1+0 records out 
00:19:34.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798903 s, 5.1 MB/s 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:34.851 23:01:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:34.851 23:01:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.851 23:01:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:34.851 23:01:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:35.113 /dev/nbd1 00:19:35.113 23:01:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:35.113 23:01:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:35.113 1+0 records in 00:19:35.113 1+0 records out 00:19:35.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393398 s, 10.4 MB/s 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:35.113 23:01:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:35.113 23:01:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:35.113 23:01:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:35.113 23:01:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:35.113 23:01:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:35.113 23:01:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:35.374 { 00:19:35.374 "nbd_device": "/dev/nbd0", 00:19:35.374 "bdev_name": "Malloc0" 00:19:35.374 }, 00:19:35.374 { 00:19:35.374 "nbd_device": "/dev/nbd1", 00:19:35.374 "bdev_name": "Malloc1" 00:19:35.374 } 
00:19:35.374 ]' 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:35.374 { 00:19:35.374 "nbd_device": "/dev/nbd0", 00:19:35.374 "bdev_name": "Malloc0" 00:19:35.374 }, 00:19:35.374 { 00:19:35.374 "nbd_device": "/dev/nbd1", 00:19:35.374 "bdev_name": "Malloc1" 00:19:35.374 } 00:19:35.374 ]' 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:35.374 /dev/nbd1' 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:35.374 /dev/nbd1' 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:35.374 23:01:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:35.375 23:01:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:35.375 23:01:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:35.375 23:01:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:35.375 23:01:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:35.375 256+0 records in 00:19:35.375 256+0 records out 00:19:35.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00831752 s, 126 MB/s 00:19:35.375 23:01:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.375 23:01:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:35.375 256+0 records in 00:19:35.375 256+0 records out 00:19:35.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0476505 s, 22.0 MB/s 00:19:35.375 23:01:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.375 23:01:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:35.639 256+0 records in 00:19:35.639 256+0 records out 00:19:35.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112674 s, 9.3 MB/s 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.639 23:01:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:35.902 23:01:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:35.902 23:01:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:35.902 23:01:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:35.902 23:01:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:35.902 23:01:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:35.902 23:01:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:35.902 23:01:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:35.902 23:01:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:35.902 23:01:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.902 23:01:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:36.163 23:01:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:36.163 23:01:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:36.163 23:01:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:36.164 23:01:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.164 23:01:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.164 23:01:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:36.164 23:01:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:36.164 23:01:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.164 23:01:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:36.164 23:01:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:36.164 23:01:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:36.426 23:01:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:19:36.426 23:01:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:36.686 23:01:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:19:37.625 [2024-12-09 23:01:15.920454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:37.625 [2024-12-09 23:01:16.062303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.625 [2024-12-09 23:01:16.062493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.887 [2024-12-09 23:01:16.220945] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:37.887 [2024-12-09 23:01:16.221073] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:39.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:39.831 23:01:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58463 /var/tmp/spdk-nbd.sock 00:19:39.831 23:01:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58463 ']' 00:19:39.831 23:01:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:39.831 23:01:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.831 23:01:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
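Rounds 0-2 above, and the Round 3 teardown that follows, all come from the same outer loop in event.sh (@18-@38 in the trace). Condensed into a sketch, with the per-round malloc/nbd/dd pass factored into a hypothetical verify_round helper standing in for the sequence sketched earlier:

```bash
# Outer structure of app_repeat_test, condensed from the event.sh trace.
# verify_round is a hypothetical stand-in; error handling is elided, and
# backgrounding the app here is an assumption.
sock=/var/tmp/spdk-nbd.sock

test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$sock"
    verify_round "$sock"                          # malloc/nbd/dd pass per round
    scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM
    sleep 3                                       # app restarts itself (-t 4)
done

waitforlisten "$repeat_pid" "$sock"               # Round 3: final instance
killprocess "$repeat_pid"
```

With -t 4 the app restarts after each SIGTERM, which is why the log shows "spdk_app_start is called in Round 0..3" followed by four shutdown notices.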
00:19:39.831 23:01:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.831 23:01:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:40.092 23:01:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.092 23:01:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:40.092 23:01:18 event.app_repeat -- event/event.sh@39 -- # killprocess 58463 00:19:40.092 23:01:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58463 ']' 00:19:40.092 23:01:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58463 00:19:40.093 23:01:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:19:40.093 23:01:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.093 23:01:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58463 00:19:40.093 killing process with pid 58463 00:19:40.093 23:01:18 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.093 23:01:18 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.093 23:01:18 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58463' 00:19:40.093 23:01:18 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58463 00:19:40.093 23:01:18 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58463 00:19:40.662 spdk_app_start is called in Round 0. 00:19:40.662 Shutdown signal received, stop current app iteration 00:19:40.662 Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 reinitialization... 00:19:40.662 spdk_app_start is called in Round 1. 00:19:40.662 Shutdown signal received, stop current app iteration 00:19:40.662 Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 reinitialization... 00:19:40.662 spdk_app_start is called in Round 2. 00:19:40.662 Shutdown signal received, stop current app iteration 00:19:40.662 Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 reinitialization... 00:19:40.662 spdk_app_start is called in Round 3. 00:19:40.662 Shutdown signal received, stop current app iteration 00:19:40.662 ************************************ 00:19:40.662 END TEST app_repeat 00:19:40.662 ************************************ 00:19:40.662 23:01:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:19:40.662 23:01:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:19:40.662 00:19:40.662 real 0m19.478s 00:19:40.662 user 0m41.977s 00:19:40.662 sys 0m2.818s 00:19:40.662 23:01:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.662 23:01:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:40.923 23:01:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:19:40.923 23:01:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:19:40.923 23:01:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:40.923 23:01:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.923 23:01:19 event -- common/autotest_common.sh@10 -- # set +x 00:19:40.923 ************************************ 00:19:40.923 START TEST cpu_locks 00:19:40.923 ************************************ 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:19:40.923 * Looking for test storage... 
00:19:40.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.923 23:01:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:40.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.923 --rc genhtml_branch_coverage=1 00:19:40.923 --rc genhtml_function_coverage=1 00:19:40.923 --rc genhtml_legend=1 00:19:40.923 --rc geninfo_all_blocks=1 00:19:40.923 --rc geninfo_unexecuted_blocks=1 00:19:40.923 00:19:40.923 ' 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:40.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.923 --rc genhtml_branch_coverage=1 00:19:40.923 --rc genhtml_function_coverage=1 
00:19:40.923 --rc genhtml_legend=1 00:19:40.923 --rc geninfo_all_blocks=1 00:19:40.923 --rc geninfo_unexecuted_blocks=1 00:19:40.923 00:19:40.923 ' 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:40.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.923 --rc genhtml_branch_coverage=1 00:19:40.923 --rc genhtml_function_coverage=1 00:19:40.923 --rc genhtml_legend=1 00:19:40.923 --rc geninfo_all_blocks=1 00:19:40.923 --rc geninfo_unexecuted_blocks=1 00:19:40.923 00:19:40.923 ' 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:40.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.923 --rc genhtml_branch_coverage=1 00:19:40.923 --rc genhtml_function_coverage=1 00:19:40.923 --rc genhtml_legend=1 00:19:40.923 --rc geninfo_all_blocks=1 00:19:40.923 --rc geninfo_unexecuted_blocks=1 00:19:40.923 00:19:40.923 ' 00:19:40.923 23:01:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:19:40.923 23:01:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:19:40.923 23:01:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:19:40.923 23:01:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.923 23:01:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:40.923 ************************************ 00:19:40.923 START TEST default_locks 00:19:40.923 ************************************ 00:19:40.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.923 23:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:19:40.923 23:01:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58910 00:19:40.923 23:01:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58910 00:19:40.924 23:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58910 ']' 00:19:40.924 23:01:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:40.924 23:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.924 23:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.924 23:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.924 23:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.924 23:01:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:41.184 [2024-12-09 23:01:19.432282] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:19:41.184 [2024-12-09 23:01:19.432679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58910 ] 00:19:41.184 [2024-12-09 23:01:19.598824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.444 [2024-12-09 23:01:19.778482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.382 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.382 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:19:42.382 23:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58910 00:19:42.382 23:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:42.382 23:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58910 00:19:42.382 23:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58910 00:19:42.382 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58910 ']' 00:19:42.382 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58910 00:19:42.642 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:19:42.642 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.642 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58910 00:19:42.642 killing process with pid 58910 00:19:42.642 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.642 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.642 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58910' 00:19:42.642 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58910 00:19:42.642 23:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58910 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58910 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58910 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:19:44.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
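Note: the lock probe exercised in the trace above reduces to a one-line check. A minimal sketch of the pattern visible in this xtrace (helper name and argument handling as in test/event/cpu_locks.sh; the spdk_cpu_lock file-name prefix is the one spdk_tgt creates under /var/tmp):

    # Sketch: spdk_tgt -m 0x1 holds a lock on /var/tmp/spdk_cpu_lock_000,
    # which lslocks(8) reports for that pid; grep -q makes it a yes/no test.
    locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 58910   # succeeds while the target still holds its core lock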
00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58910 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58910 ']' 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:44.553 ERROR: process (pid: 58910) is no longer running 00:19:44.553 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58910) - No such process 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:19:44.553 00:19:44.553 real 0m3.381s 00:19:44.553 user 0m3.138s 00:19:44.553 sys 0m0.712s 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.553 23:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:44.553 ************************************ 00:19:44.553 END TEST default_locks 00:19:44.553 ************************************ 00:19:44.553 23:01:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:19:44.553 23:01:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:44.553 23:01:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.553 23:01:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:44.553 ************************************ 00:19:44.553 START TEST default_locks_via_rpc 00:19:44.553 ************************************ 00:19:44.553 23:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:19:44.553 23:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58974 00:19:44.553 23:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58974 00:19:44.554 23:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58974 ']' 
00:19:44.554 23:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.554 23:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.554 23:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:44.554 23:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.554 23:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.554 23:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.554 [2024-12-09 23:01:22.896992] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:44.554 [2024-12-09 23:01:22.897185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58974 ] 00:19:44.814 [2024-12-09 23:01:23.065535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.814 [2024-12-09 23:01:23.209295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58974 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58974 00:19:45.804 23:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:45.804 23:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58974 00:19:45.804 23:01:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58974 ']' 00:19:45.804 23:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58974 00:19:45.804 23:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:19:45.804 23:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.804 23:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58974 00:19:45.804 killing process with pid 58974 00:19:45.804 23:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.804 23:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.804 23:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58974' 00:19:45.804 23:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58974 00:19:45.804 23:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58974 00:19:47.722 ************************************ 00:19:47.722 END TEST default_locks_via_rpc 00:19:47.722 ************************************ 00:19:47.722 00:19:47.722 real 0m3.204s 00:19:47.722 user 0m3.081s 00:19:47.722 sys 0m0.653s 00:19:47.722 23:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.722 23:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:47.722 23:01:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:19:47.722 23:01:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:47.722 23:01:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.722 23:01:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:47.723 ************************************ 00:19:47.723 START TEST non_locking_app_on_locked_coremask 00:19:47.723 ************************************ 00:19:47.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.723 23:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:19:47.723 23:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59037 00:19:47.723 23:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59037 /var/tmp/spdk.sock 00:19:47.723 23:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59037 ']' 00:19:47.723 23:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:47.723 23:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.723 23:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.723 23:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
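Note: the non-locking case that follows boils down to starting a second target on the same core with lock claiming disabled. A minimal sketch using the binary, mask, and socket paths that appear in this run (backgrounding and readiness waits elided; the real test polls with waitforlisten):

    # First target claims core 0's lock file; the second shares mask 0x1 but
    # passes --disable-cpumask-locks, so its startup succeeds anyway.
    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$BIN" -m 0x1 &
    "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &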
00:19:47.723 23:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.723 23:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:48.013 [2024-12-09 23:01:26.183654] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:48.013 [2024-12-09 23:01:26.184156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59037 ] 00:19:48.014 [2024-12-09 23:01:26.358473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.275 [2024-12-09 23:01:26.598039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59059 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59059 /var/tmp/spdk2.sock 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59059 ']' 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:49.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.214 23:01:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:49.214 [2024-12-09 23:01:27.543085] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:49.214 [2024-12-09 23:01:27.543268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59059 ] 00:19:49.476 [2024-12-09 23:01:27.730009] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:19:49.476 [2024-12-09 23:01:27.730130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.738 [2024-12-09 23:01:28.078945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.728 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.728 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:51.728 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59037 00:19:51.728 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59037 00:19:51.728 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:52.299 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59037 00:19:52.299 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59037 ']' 00:19:52.299 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59037 00:19:52.299 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:52.299 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.299 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59037 00:19:52.299 killing process with pid 59037 00:19:52.299 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.299 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.300 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59037' 00:19:52.300 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59037 00:19:52.300 23:01:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59037 00:19:56.554 23:01:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59059 00:19:56.554 23:01:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59059 ']' 00:19:56.554 23:01:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59059 00:19:56.554 23:01:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:56.554 23:01:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.554 23:01:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59059 00:19:56.554 killing process with pid 59059 00:19:56.554 23:01:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.555 23:01:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.555 23:01:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59059' 00:19:56.555 23:01:34 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59059 00:19:56.555 23:01:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59059 00:19:57.943 00:19:57.943 real 0m10.313s 00:19:57.943 user 0m10.371s 00:19:57.943 sys 0m1.415s 00:19:57.943 23:01:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.943 23:01:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:57.943 ************************************ 00:19:57.943 END TEST non_locking_app_on_locked_coremask 00:19:57.943 ************************************ 00:19:58.207 23:01:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:19:58.207 23:01:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:58.207 23:01:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.207 23:01:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:58.207 ************************************ 00:19:58.207 START TEST locking_app_on_unlocked_coremask 00:19:58.207 ************************************ 00:19:58.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.207 23:01:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:19:58.207 23:01:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59190 00:19:58.207 23:01:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59190 /var/tmp/spdk.sock 00:19:58.207 23:01:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59190 ']' 00:19:58.207 23:01:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.207 23:01:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.207 23:01:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:19:58.207 23:01:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.207 23:01:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.207 23:01:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:58.207 [2024-12-09 23:01:36.555504] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:58.207 [2024-12-09 23:01:36.555666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59190 ] 00:19:58.469 [2024-12-09 23:01:36.721554] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:19:58.469 [2024-12-09 23:01:36.721649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.469 [2024-12-09 23:01:36.899848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59212 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59212 /var/tmp/spdk2.sock 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59212 ']' 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:59.412 23:01:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:19:59.412 [2024-12-09 23:01:37.861283] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:19:59.412 [2024-12-09 23:01:37.861483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59212 ] 00:19:59.674 [2024-12-09 23:01:38.046892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.937 [2024-12-09 23:01:38.384893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.499 23:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.499 23:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:20:02.499 23:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59212 00:20:02.499 23:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59212 00:20:02.499 23:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59190 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59190 ']' 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59190 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59190 00:20:02.763 killing process with pid 59190 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59190' 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59190 00:20:02.763 23:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59190 00:20:06.964 23:01:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59212 00:20:06.964 23:01:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59212 ']' 00:20:06.964 23:01:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59212 00:20:06.964 23:01:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:20:06.964 23:01:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.964 23:01:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59212 00:20:06.964 killing process with pid 59212 00:20:06.964 23:01:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.964 23:01:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.964 23:01:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59212' 00:20:06.964 23:01:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59212 00:20:06.964 23:01:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59212 00:20:08.878 ************************************ 00:20:08.878 END TEST locking_app_on_unlocked_coremask 00:20:08.878 ************************************ 00:20:08.878 00:20:08.878 real 0m10.417s 00:20:08.878 user 0m10.575s 00:20:08.878 sys 0m1.451s 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:08.878 23:01:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:20:08.878 23:01:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:08.878 23:01:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.878 23:01:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:08.878 ************************************ 00:20:08.878 START TEST locking_app_on_locked_coremask 00:20:08.878 ************************************ 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:20:08.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59349 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59349 /var/tmp/spdk.sock 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59349 ']' 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.878 23:01:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:08.878 [2024-12-09 23:01:47.084821] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:20:08.878 [2024-12-09 23:01:47.085537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59349 ] 00:20:08.878 [2024-12-09 23:01:47.275898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.139 [2024-12-09 23:01:47.447145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59365 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59365 /var/tmp/spdk2.sock 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59365 /var/tmp/spdk2.sock 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59365 /var/tmp/spdk2.sock 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59365 ']' 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:10.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.083 23:01:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:10.083 [2024-12-09 23:01:48.416114] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:20:10.083 [2024-12-09 23:01:48.416551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59365 ] 00:20:10.343 [2024-12-09 23:01:48.607159] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59349 has claimed it. 00:20:10.343 [2024-12-09 23:01:48.607303] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:20:10.601 ERROR: process (pid: 59365) is no longer running 00:20:10.601 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59365) - No such process 00:20:10.601 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.601 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:20:10.601 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:20:10.601 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.601 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.601 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.601 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59349 00:20:10.601 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59349 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59349 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59349 ']' 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59349 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59349 00:20:10.860 killing process with pid 59349 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59349' 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59349 00:20:10.860 23:01:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59349 00:20:12.766 00:20:12.766 real 0m4.272s 00:20:12.766 user 0m4.204s 00:20:12.766 sys 0m0.905s 00:20:12.766 23:01:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.766 23:01:51 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:20:12.766 ************************************ 00:20:12.766 END TEST locking_app_on_locked_coremask 00:20:12.766 ************************************ 00:20:13.026 23:01:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:20:13.026 23:01:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:13.026 23:01:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.026 23:01:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:13.026 ************************************ 00:20:13.026 START TEST locking_overlapped_coremask 00:20:13.026 ************************************ 00:20:13.026 23:01:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:20:13.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.026 23:01:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59429 00:20:13.026 23:01:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59429 /var/tmp/spdk.sock 00:20:13.026 23:01:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59429 ']' 00:20:13.026 23:01:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.026 23:01:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.026 23:01:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.026 23:01:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.026 23:01:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:13.026 23:01:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:13.026 [2024-12-09 23:01:51.395257] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:20:13.026 [2024-12-09 23:01:51.395426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59429 ] 00:20:13.286 [2024-12-09 23:01:51.559736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:13.286 [2024-12-09 23:01:51.708004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.286 [2024-12-09 23:01:51.708328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.286 [2024-12-09 23:01:51.708424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59447 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59447 /var/tmp/spdk2.sock 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59447 /var/tmp/spdk2.sock 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:20:14.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59447 /var/tmp/spdk2.sock 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59447 ']' 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.274 23:01:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:14.274 [2024-12-09 23:01:52.709288] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:20:14.274 [2024-12-09 23:01:52.709458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59447 ] 00:20:14.535 [2024-12-09 23:01:52.892518] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59429 has claimed it. 00:20:14.536 [2024-12-09 23:01:52.892615] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:20:15.107 ERROR: process (pid: 59447) is no longer running 00:20:15.107 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59447) - No such process 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59429 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59429 ']' 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59429 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59429 00:20:15.107 killing process with pid 59429 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59429' 00:20:15.107 23:01:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59429 00:20:15.107 23:01:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59429 00:20:17.017 ************************************ 00:20:17.017 END TEST locking_overlapped_coremask 00:20:17.017 ************************************ 00:20:17.017 00:20:17.017 real 0m4.150s 00:20:17.017 user 0m10.998s 00:20:17.017 sys 0m0.721s 00:20:17.017 23:01:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.017 23:01:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:17.312 23:01:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:20:17.312 23:01:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:17.312 23:01:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.312 23:01:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:17.312 ************************************ 00:20:17.312 START TEST locking_overlapped_coremask_via_rpc 00:20:17.312 ************************************ 00:20:17.312 23:01:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:20:17.312 23:01:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59511 00:20:17.312 23:01:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59511 /var/tmp/spdk.sock 00:20:17.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.312 23:01:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59511 ']' 00:20:17.312 23:01:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:20:17.312 23:01:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.312 23:01:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.312 23:01:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.312 23:01:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.312 23:01:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:17.312 [2024-12-09 23:01:55.615034] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:20:17.312 [2024-12-09 23:01:55.615196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59511 ] 00:20:17.574 [2024-12-09 23:01:55.783278] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:20:17.574 [2024-12-09 23:01:55.783430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:17.574 [2024-12-09 23:01:55.959540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.574 [2024-12-09 23:01:55.960100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.574 [2024-12-09 23:01:55.960183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59529 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59529 /var/tmp/spdk2.sock 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59529 ']' 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:18.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:20:18.517 23:01:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.517 [2024-12-09 23:01:56.969940] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:20:18.517 [2024-12-09 23:01:56.970192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59529 ] 00:20:18.778 [2024-12-09 23:01:57.161705] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:20:18.778 [2024-12-09 23:01:57.161799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.175 [2024-12-09 23:01:57.460848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.175 [2024-12-09 23:01:57.464543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.175 [2024-12-09 23:01:57.464581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.721 [2024-12-09 23:01:59.608493] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59511 has claimed it. 00:20:21.721 request: 00:20:21.721 { 00:20:21.721 "method": "framework_enable_cpumask_locks", 00:20:21.721 "req_id": 1 00:20:21.721 } 00:20:21.721 Got JSON-RPC error response 00:20:21.721 response: 00:20:21.721 { 00:20:21.721 "code": -32603, 00:20:21.721 "message": "Failed to claim CPU core: 2" 00:20:21.721 } 00:20:21.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
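The JSON-RPC failure above is the test's expected outcome: two spdk_tgt instances were started with overlapping core masks (0x7 and 0x1c share core 2) and --disable-cpumask-locks, so the first framework_enable_cpumask_locks call claims /var/tmp/spdk_cpu_lock_000..002 and the second is refused with -32603. A minimal sketch of reproducing the same sequence by hand, assuming a built SPDK tree at $SPDK_DIR (a placeholder, not taken from this log):

#!/usr/bin/env bash
# Sketch: provoke the "Failed to claim CPU core: 2" error shown above.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# Two targets with overlapping masks; lock acquisition is deferred.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x7 --disable-cpumask-locks &                          # cores 0,1,2
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2,3,4
sleep 2   # crude wait; the test polls with waitforlisten instead

# First claim succeeds and creates /var/tmp/spdk_cpu_lock_000..002.
"$SPDK_DIR/scripts/rpc.py" framework_enable_cpumask_locks

# Second claim fails with -32603 because core 2 is already locked.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks || true

ls /var/tmp/spdk_cpu_lock_*   # still only _000 _001 _002, as check_remaining_locks verifies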
00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59511 /var/tmp/spdk.sock 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59511 ']' 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59529 /var/tmp/spdk2.sock 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59529 ']' 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.721 23:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.721 ************************************ 00:20:21.721 END TEST locking_overlapped_coremask_via_rpc 00:20:21.721 ************************************ 00:20:21.721 23:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.721 23:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:21.721 23:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:20:21.721 23:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:20:21.721 23:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:20:21.721 23:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:20:21.721 00:20:21.721 real 0m4.562s 00:20:21.721 user 0m1.425s 00:20:21.721 sys 0m0.213s 00:20:21.721 23:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.721 23:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:21.721 23:02:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:20:21.721 23:02:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59511 ]] 00:20:21.721 23:02:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59511 00:20:21.721 23:02:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59511 ']' 00:20:21.721 23:02:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59511 00:20:21.721 23:02:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:20:21.721 23:02:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.722 23:02:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59511 00:20:21.722 killing process with pid 59511 00:20:21.722 23:02:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.722 23:02:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.722 23:02:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59511' 00:20:21.722 23:02:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59511 00:20:21.722 23:02:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59511 00:20:24.268 23:02:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59529 ]] 00:20:24.268 23:02:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59529 00:20:24.268 23:02:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59529 ']' 00:20:24.268 23:02:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59529 00:20:24.268 23:02:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:20:24.268 23:02:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.268 
23:02:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59529 00:20:24.268 killing process with pid 59529 00:20:24.268 23:02:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:24.268 23:02:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:24.268 23:02:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59529' 00:20:24.268 23:02:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59529 00:20:24.268 23:02:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59529 00:20:25.651 23:02:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:20:25.651 Process with pid 59511 is not found 00:20:25.651 Process with pid 59529 is not found 00:20:25.651 23:02:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:20:25.651 23:02:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59511 ]] 00:20:25.651 23:02:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59511 00:20:25.651 23:02:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59511 ']' 00:20:25.651 23:02:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59511 00:20:25.651 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59511) - No such process 00:20:25.651 23:02:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59511 is not found' 00:20:25.651 23:02:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59529 ]] 00:20:25.651 23:02:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59529 00:20:25.651 23:02:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59529 ']' 00:20:25.651 23:02:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59529 00:20:25.651 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59529) - No such process 00:20:25.651 23:02:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59529 is not found' 00:20:25.651 23:02:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:20:25.651 00:20:25.651 real 0m44.949s 00:20:25.651 user 1m16.410s 00:20:25.651 sys 0m7.430s 00:20:25.651 ************************************ 00:20:25.651 END TEST cpu_locks 00:20:25.651 ************************************ 00:20:25.651 23:02:04 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.651 23:02:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:25.912 ************************************ 00:20:25.912 END TEST event 00:20:25.912 ************************************ 00:20:25.912 00:20:25.912 real 1m14.700s 00:20:25.912 user 2m14.522s 00:20:25.912 sys 0m11.123s 00:20:25.912 23:02:04 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.912 23:02:04 event -- common/autotest_common.sh@10 -- # set +x 00:20:25.912 23:02:04 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:20:25.912 23:02:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:25.912 23:02:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.912 23:02:04 -- common/autotest_common.sh@10 -- # set +x 00:20:25.912 ************************************ 00:20:25.912 START TEST thread 00:20:25.912 ************************************ 00:20:25.912 23:02:04 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:20:25.912 * Looking for test storage... 
00:20:25.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:20:25.912 23:02:04 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:25.912 23:02:04 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:20:25.912 23:02:04 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:25.912 23:02:04 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:25.912 23:02:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.912 23:02:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.912 23:02:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.912 23:02:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.912 23:02:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.912 23:02:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.912 23:02:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.912 23:02:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.912 23:02:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.912 23:02:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.912 23:02:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.912 23:02:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:20:25.912 23:02:04 thread -- scripts/common.sh@345 -- # : 1 00:20:25.912 23:02:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.912 23:02:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:25.912 23:02:04 thread -- scripts/common.sh@365 -- # decimal 1 00:20:25.912 23:02:04 thread -- scripts/common.sh@353 -- # local d=1 00:20:25.912 23:02:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.912 23:02:04 thread -- scripts/common.sh@355 -- # echo 1 00:20:25.912 23:02:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.912 23:02:04 thread -- scripts/common.sh@366 -- # decimal 2 00:20:25.912 23:02:04 thread -- scripts/common.sh@353 -- # local d=2 00:20:25.912 23:02:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.912 23:02:04 thread -- scripts/common.sh@355 -- # echo 2 00:20:26.178 23:02:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:20:26.178 23:02:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.178 23:02:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:26.178 23:02:04 thread -- scripts/common.sh@368 -- # return 0 00:20:26.178 23:02:04 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.178 23:02:04 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:26.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.178 --rc genhtml_branch_coverage=1 00:20:26.178 --rc genhtml_function_coverage=1 00:20:26.178 --rc genhtml_legend=1 00:20:26.178 --rc geninfo_all_blocks=1 00:20:26.178 --rc geninfo_unexecuted_blocks=1 00:20:26.178 00:20:26.178 ' 00:20:26.178 23:02:04 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:26.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.178 --rc genhtml_branch_coverage=1 00:20:26.178 --rc genhtml_function_coverage=1 00:20:26.178 --rc genhtml_legend=1 00:20:26.178 --rc geninfo_all_blocks=1 00:20:26.178 --rc geninfo_unexecuted_blocks=1 00:20:26.178 00:20:26.178 ' 00:20:26.178 23:02:04 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:26.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:20:26.178 --rc genhtml_branch_coverage=1 00:20:26.178 --rc genhtml_function_coverage=1 00:20:26.178 --rc genhtml_legend=1 00:20:26.178 --rc geninfo_all_blocks=1 00:20:26.178 --rc geninfo_unexecuted_blocks=1 00:20:26.178 00:20:26.178 ' 00:20:26.178 23:02:04 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:26.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.178 --rc genhtml_branch_coverage=1 00:20:26.178 --rc genhtml_function_coverage=1 00:20:26.178 --rc genhtml_legend=1 00:20:26.178 --rc geninfo_all_blocks=1 00:20:26.178 --rc geninfo_unexecuted_blocks=1 00:20:26.178 00:20:26.178 ' 00:20:26.178 23:02:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:20:26.178 23:02:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:20:26.178 23:02:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.178 23:02:04 thread -- common/autotest_common.sh@10 -- # set +x 00:20:26.178 ************************************ 00:20:26.178 START TEST thread_poller_perf 00:20:26.178 ************************************ 00:20:26.178 23:02:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:20:26.178 [2024-12-09 23:02:04.426159] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:20:26.178 [2024-12-09 23:02:04.426554] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59724 ] 00:20:26.178 [2024-12-09 23:02:04.590530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.439 [2024-12-09 23:02:04.758298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.439 Running 1000 pollers for 1 seconds with 1 microseconds period. 
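The banner above decodes the benchmark's flags: -b 1000 registers one thousand pollers, -l 1 gives each a 1-microsecond period, and -t 1 measures for one second (reading the flags off the tool's own banner; the second run below passes -l 0, i.e. a 0-microsecond period). Spelled out with the same paths the test uses:

# Same invocation as traced above, flags annotated:
/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
# -b 1000  number of pollers to register
# -l 1     poller period in microseconds (0 = no timer period, as in the second run)
# -t 1     measurement time in seconds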
00:20:27.849 [2024-12-09T23:02:06.311Z] ====================================== 00:20:27.849 [2024-12-09T23:02:06.311Z] busy:2616967194 (cyc) 00:20:27.849 [2024-12-09T23:02:06.311Z] total_run_count: 306000 00:20:27.849 [2024-12-09T23:02:06.311Z] tsc_hz: 2600000000 (cyc) 00:20:27.849 [2024-12-09T23:02:06.311Z] ====================================== 00:20:27.849 [2024-12-09T23:02:06.311Z] poller_cost: 8552 (cyc), 3289 (nsec) 00:20:27.849 00:20:27.849 ************************************ 00:20:27.849 END TEST thread_poller_perf 00:20:27.849 ************************************ 00:20:27.849 real 0m1.588s 00:20:27.849 user 0m1.368s 00:20:27.849 sys 0m0.108s 00:20:27.849 23:02:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.849 23:02:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:20:27.849 23:02:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:20:27.849 23:02:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:20:27.849 23:02:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.849 23:02:06 thread -- common/autotest_common.sh@10 -- # set +x 00:20:27.849 ************************************ 00:20:27.849 START TEST thread_poller_perf 00:20:27.849 ************************************ 00:20:27.849 23:02:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:20:27.849 [2024-12-09 23:02:06.089263] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:20:27.849 [2024-12-09 23:02:06.089583] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59755 ] 00:20:27.849 [2024-12-09 23:02:06.255945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.110 Running 1000 pollers for 1 seconds with 0 microseconds period. 
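poller_cost in the summary above is plain arithmetic over the printed counters: busy cycles divided by total_run_count, then scaled to nanoseconds by tsc_hz. For this run, 2616967194 / 306000 ≈ 8552 cycles and 8552 / 2.6 GHz ≈ 3289 ns; the 0-period run whose results follow yields 2605251598 / 3515000 ≈ 741 cycles the same way. A sketch of the computation (variable names are illustrative, not from the tool):

#!/usr/bin/env bash
# Recompute poller_cost from the counters in the summary above.
busy=2616967194       # busy TSC cycles over the measurement window
runs=306000           # total_run_count
tsc_hz=2600000000     # TSC frequency (cycles per second)

cost_cyc=$(( busy / runs ))                      # -> 8552
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # -> 3289
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"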
00:20:28.110 [2024-12-09 23:02:06.426017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.514 [2024-12-09T23:02:07.976Z] ====================================== 00:20:29.514 [2024-12-09T23:02:07.976Z] busy:2605251598 (cyc) 00:20:29.514 [2024-12-09T23:02:07.976Z] total_run_count: 3515000 00:20:29.514 [2024-12-09T23:02:07.976Z] tsc_hz: 2600000000 (cyc) 00:20:29.514 [2024-12-09T23:02:07.976Z] ====================================== 00:20:29.514 [2024-12-09T23:02:07.976Z] poller_cost: 741 (cyc), 285 (nsec) 00:20:29.514 00:20:29.514 real 0m1.606s 00:20:29.514 user 0m1.399s 00:20:29.514 sys 0m0.092s 00:20:29.514 23:02:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.514 23:02:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:20:29.514 ************************************ 00:20:29.514 END TEST thread_poller_perf 00:20:29.514 ************************************ 00:20:29.514 23:02:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:20:29.514 ************************************ 00:20:29.514 END TEST thread 00:20:29.514 ************************************ 00:20:29.514 00:20:29.514 real 0m3.508s 00:20:29.514 user 0m2.884s 00:20:29.514 sys 0m0.344s 00:20:29.514 23:02:07 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.514 23:02:07 thread -- common/autotest_common.sh@10 -- # set +x 00:20:29.514 23:02:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:20:29.514 23:02:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:20:29.514 23:02:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:29.514 23:02:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.514 23:02:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.514 ************************************ 00:20:29.514 START TEST app_cmdline 00:20:29.514 ************************************ 00:20:29.514 23:02:07 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:20:29.514 * Looking for test storage... 
00:20:29.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:20:29.514 23:02:07 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:29.514 23:02:07 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:20:29.514 23:02:07 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:29.514 23:02:07 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:20:29.514 23:02:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.515 23:02:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:20:29.515 23:02:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.515 23:02:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.515 23:02:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.515 23:02:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:29.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.515 --rc genhtml_branch_coverage=1 00:20:29.515 --rc genhtml_function_coverage=1 00:20:29.515 --rc genhtml_legend=1 00:20:29.515 --rc geninfo_all_blocks=1 00:20:29.515 --rc geninfo_unexecuted_blocks=1 00:20:29.515 00:20:29.515 ' 00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:29.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.515 --rc genhtml_branch_coverage=1 00:20:29.515 --rc genhtml_function_coverage=1 00:20:29.515 --rc genhtml_legend=1 00:20:29.515 --rc geninfo_all_blocks=1 00:20:29.515 --rc geninfo_unexecuted_blocks=1 00:20:29.515 00:20:29.515 ' 00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:29.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.515 --rc genhtml_branch_coverage=1 00:20:29.515 --rc genhtml_function_coverage=1 00:20:29.515 --rc genhtml_legend=1 00:20:29.515 --rc geninfo_all_blocks=1 00:20:29.515 --rc geninfo_unexecuted_blocks=1 00:20:29.515 00:20:29.515 ' 00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:29.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.515 --rc genhtml_branch_coverage=1 00:20:29.515 --rc genhtml_function_coverage=1 00:20:29.515 --rc genhtml_legend=1 00:20:29.515 --rc geninfo_all_blocks=1 00:20:29.515 --rc geninfo_unexecuted_blocks=1 00:20:29.515 00:20:29.515 ' 00:20:29.515 23:02:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:20:29.515 23:02:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59844 00:20:29.515 23:02:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59844 00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59844 ']' 00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.515 23:02:07 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.515 23:02:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:20:29.780 [2024-12-09 23:02:08.062471] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
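cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable and anything else is rejected with JSON-RPC error -32601 ("Method not found"), as the env_dpdk_get_mem_stats attempt below demonstrates. A minimal sketch of probing the allow-list against the running target ($SPDK_DIR is a placeholder):

# Sketch: probe the RPC allow-list of the spdk_tgt started above.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

"$SPDK_DIR/scripts/rpc.py" spdk_get_version        # allowed: returns the version JSON
"$SPDK_DIR/scripts/rpc.py" rpc_get_methods         # allowed: lists exactly the two methods
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats  # denied: -32601 "Method not found"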
00:20:29.780 [2024-12-09 23:02:08.062905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59844 ] 00:20:29.780 [2024-12-09 23:02:08.228820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.041 [2024-12-09 23:02:08.373251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:20:30.982 { 00:20:30.982 "version": "SPDK v25.01-pre git sha1 1ae735a5d", 00:20:30.982 "fields": { 00:20:30.982 "major": 25, 00:20:30.982 "minor": 1, 00:20:30.982 "patch": 0, 00:20:30.982 "suffix": "-pre", 00:20:30.982 "commit": "1ae735a5d" 00:20:30.982 } 00:20:30.982 } 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:20:30.982 23:02:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:30.982 23:02:09 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:20:31.242 request: 00:20:31.242 { 00:20:31.242 "method": "env_dpdk_get_mem_stats", 00:20:31.242 "req_id": 1 00:20:31.242 } 00:20:31.242 Got JSON-RPC error response 00:20:31.242 response: 00:20:31.242 { 00:20:31.242 "code": -32601, 00:20:31.242 "message": "Method not found" 00:20:31.242 } 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.242 23:02:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59844 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59844 ']' 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59844 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59844 00:20:31.242 killing process with pid 59844 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59844' 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@973 -- # kill 59844 00:20:31.242 23:02:09 app_cmdline -- common/autotest_common.sh@978 -- # wait 59844 00:20:33.157 ************************************ 00:20:33.157 END TEST app_cmdline 00:20:33.157 ************************************ 00:20:33.157 00:20:33.157 real 0m3.787s 00:20:33.157 user 0m3.969s 00:20:33.157 sys 0m0.637s 00:20:33.157 23:02:11 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.157 23:02:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:20:33.418 23:02:11 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:20:33.418 23:02:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:33.418 23:02:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.418 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:20:33.418 ************************************ 00:20:33.418 START TEST version 00:20:33.418 ************************************ 00:20:33.418 23:02:11 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:20:33.418 * Looking for test storage... 
00:20:33.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:20:33.418 23:02:11 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:33.418 23:02:11 version -- common/autotest_common.sh@1711 -- # lcov --version 00:20:33.418 23:02:11 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:33.418 23:02:11 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:33.418 23:02:11 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.418 23:02:11 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.418 23:02:11 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.418 23:02:11 version -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.418 23:02:11 version -- scripts/common.sh@336 -- # read -ra ver1 00:20:33.418 23:02:11 version -- scripts/common.sh@337 -- # IFS=.-: 00:20:33.418 23:02:11 version -- scripts/common.sh@337 -- # read -ra ver2 00:20:33.418 23:02:11 version -- scripts/common.sh@338 -- # local 'op=<' 00:20:33.418 23:02:11 version -- scripts/common.sh@340 -- # ver1_l=2 00:20:33.418 23:02:11 version -- scripts/common.sh@341 -- # ver2_l=1 00:20:33.419 23:02:11 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:33.419 23:02:11 version -- scripts/common.sh@344 -- # case "$op" in 00:20:33.419 23:02:11 version -- scripts/common.sh@345 -- # : 1 00:20:33.419 23:02:11 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:33.419 23:02:11 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:33.419 23:02:11 version -- scripts/common.sh@365 -- # decimal 1 00:20:33.419 23:02:11 version -- scripts/common.sh@353 -- # local d=1 00:20:33.419 23:02:11 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.419 23:02:11 version -- scripts/common.sh@355 -- # echo 1 00:20:33.419 23:02:11 version -- scripts/common.sh@365 -- # ver1[v]=1 00:20:33.419 23:02:11 version -- scripts/common.sh@366 -- # decimal 2 00:20:33.419 23:02:11 version -- scripts/common.sh@353 -- # local d=2 00:20:33.419 23:02:11 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.419 23:02:11 version -- scripts/common.sh@355 -- # echo 2 00:20:33.419 23:02:11 version -- scripts/common.sh@366 -- # ver2[v]=2 00:20:33.419 23:02:11 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:33.419 23:02:11 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:33.419 23:02:11 version -- scripts/common.sh@368 -- # return 0 00:20:33.419 23:02:11 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.419 23:02:11 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:33.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.419 --rc genhtml_branch_coverage=1 00:20:33.419 --rc genhtml_function_coverage=1 00:20:33.419 --rc genhtml_legend=1 00:20:33.419 --rc geninfo_all_blocks=1 00:20:33.419 --rc geninfo_unexecuted_blocks=1 00:20:33.419 00:20:33.419 ' 00:20:33.419 23:02:11 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:33.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.419 --rc genhtml_branch_coverage=1 00:20:33.419 --rc genhtml_function_coverage=1 00:20:33.419 --rc genhtml_legend=1 00:20:33.419 --rc geninfo_all_blocks=1 00:20:33.419 --rc geninfo_unexecuted_blocks=1 00:20:33.419 00:20:33.419 ' 00:20:33.419 23:02:11 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:33.419 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:33.419 --rc genhtml_branch_coverage=1 00:20:33.419 --rc genhtml_function_coverage=1 00:20:33.419 --rc genhtml_legend=1 00:20:33.419 --rc geninfo_all_blocks=1 00:20:33.419 --rc geninfo_unexecuted_blocks=1 00:20:33.419 00:20:33.419 ' 00:20:33.419 23:02:11 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:33.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.419 --rc genhtml_branch_coverage=1 00:20:33.419 --rc genhtml_function_coverage=1 00:20:33.419 --rc genhtml_legend=1 00:20:33.419 --rc geninfo_all_blocks=1 00:20:33.419 --rc geninfo_unexecuted_blocks=1 00:20:33.419 00:20:33.419 ' 00:20:33.419 23:02:11 version -- app/version.sh@17 -- # get_header_version major 00:20:33.419 23:02:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:33.419 23:02:11 version -- app/version.sh@14 -- # tr -d '"' 00:20:33.419 23:02:11 version -- app/version.sh@14 -- # cut -f2 00:20:33.419 23:02:11 version -- app/version.sh@17 -- # major=25 00:20:33.419 23:02:11 version -- app/version.sh@18 -- # get_header_version minor 00:20:33.419 23:02:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:33.419 23:02:11 version -- app/version.sh@14 -- # cut -f2 00:20:33.419 23:02:11 version -- app/version.sh@14 -- # tr -d '"' 00:20:33.419 23:02:11 version -- app/version.sh@18 -- # minor=1 00:20:33.419 23:02:11 version -- app/version.sh@19 -- # get_header_version patch 00:20:33.419 23:02:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:33.419 23:02:11 version -- app/version.sh@14 -- # cut -f2 00:20:33.419 23:02:11 version -- app/version.sh@14 -- # tr -d '"' 00:20:33.419 23:02:11 version -- app/version.sh@19 -- # patch=0 00:20:33.419 23:02:11 version -- app/version.sh@20 -- # get_header_version suffix 00:20:33.419 23:02:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:33.419 23:02:11 version -- app/version.sh@14 -- # cut -f2 00:20:33.419 23:02:11 version -- app/version.sh@14 -- # tr -d '"' 00:20:33.419 23:02:11 version -- app/version.sh@20 -- # suffix=-pre 00:20:33.419 23:02:11 version -- app/version.sh@22 -- # version=25.1 00:20:33.419 23:02:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:20:33.419 23:02:11 version -- app/version.sh@28 -- # version=25.1rc0 00:20:33.419 23:02:11 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:20:33.419 23:02:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:20:33.419 23:02:11 version -- app/version.sh@30 -- # py_version=25.1rc0 00:20:33.419 23:02:11 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:20:33.419 ************************************ 00:20:33.419 END TEST version 00:20:33.419 ************************************ 00:20:33.419 00:20:33.419 real 0m0.217s 00:20:33.419 user 0m0.136s 00:20:33.419 sys 0m0.107s 00:20:33.419 23:02:11 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.419 23:02:11 version -- common/autotest_common.sh@10 -- # set +x 00:20:33.727 23:02:11 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:20:33.727 23:02:11 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:20:33.727 23:02:11 -- spdk/autotest.sh@194 -- # uname -s 00:20:33.727 23:02:11 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:33.727 23:02:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:33.727 23:02:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:33.727 23:02:11 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:20:33.727 23:02:11 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:20:33.727 23:02:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:33.727 23:02:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.727 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:20:33.727 ************************************ 00:20:33.727 START TEST blockdev_nvme 00:20:33.727 ************************************ 00:20:33.727 23:02:11 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:20:33.727 * Looking for test storage... 00:20:33.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:33.727 23:02:12 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:33.727 23:02:12 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:20:33.727 23:02:12 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:33.727 23:02:12 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:33.727 23:02:12 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:20:33.727 23:02:12 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.727 23:02:12 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.727 --rc genhtml_branch_coverage=1 00:20:33.727 --rc genhtml_function_coverage=1 00:20:33.727 --rc genhtml_legend=1 00:20:33.727 --rc geninfo_all_blocks=1 00:20:33.727 --rc geninfo_unexecuted_blocks=1 00:20:33.727 00:20:33.727 ' 00:20:33.727 23:02:12 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.727 --rc genhtml_branch_coverage=1 00:20:33.727 --rc genhtml_function_coverage=1 00:20:33.727 --rc genhtml_legend=1 00:20:33.727 --rc geninfo_all_blocks=1 00:20:33.727 --rc geninfo_unexecuted_blocks=1 00:20:33.727 00:20:33.727 ' 00:20:33.728 23:02:12 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:33.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.728 --rc genhtml_branch_coverage=1 00:20:33.728 --rc genhtml_function_coverage=1 00:20:33.728 --rc genhtml_legend=1 00:20:33.728 --rc geninfo_all_blocks=1 00:20:33.728 --rc geninfo_unexecuted_blocks=1 00:20:33.728 00:20:33.728 ' 00:20:33.728 23:02:12 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:33.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.728 --rc genhtml_branch_coverage=1 00:20:33.728 --rc genhtml_function_coverage=1 00:20:33.728 --rc genhtml_legend=1 00:20:33.728 --rc geninfo_all_blocks=1 00:20:33.728 --rc geninfo_unexecuted_blocks=1 00:20:33.728 00:20:33.728 ' 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:33.728 23:02:12 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60027 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60027 00:20:33.728 23:02:12 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60027 ']' 00:20:33.728 23:02:12 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:33.728 23:02:12 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.728 23:02:12 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.728 23:02:12 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.728 23:02:12 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.728 23:02:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:34.005 [2024-12-09 23:02:12.216589] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
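Next, blockdev.sh generates its NVMe configuration with gen_nvme.sh and loads it through load_subsystem_config, one bdev_nvme_attach_controller entry per PCIe device, as the JSON that follows shows. The same attachment can be issued per controller over RPC; a sketch reusing the first device from that config ($SPDK_DIR is a placeholder, and the -b/-t/-a spelling of the parameters is an assumption about rpc.py's CLI rather than something shown in this log):

# Sketch: attach one PCIe NVMe controller by RPC instead of bulk JSON.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

"$SPDK_DIR/scripts/rpc.py" bdev_nvme_attach_controller \
    -b Nvme0 -t PCIe -a 0000:00:10.0
# prints the namespace bdev it created, e.g. Nvme0n1

"$SPDK_DIR/scripts/rpc.py" bdev_get_bdevs   # inspect the result, as the test does below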
00:20:34.005 [2024-12-09 23:02:12.217013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60027 ] 00:20:34.005 [2024-12-09 23:02:12.381340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.265 [2024-12-09 23:02:12.548808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.207 23:02:13 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.207 23:02:13 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:20:35.207 23:02:13 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:20:35.207 23:02:13 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:20:35.207 23:02:13 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:20:35.207 23:02:13 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:20:35.207 23:02:13 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:35.207 23:02:13 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:20:35.207 23:02:13 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.207 23:02:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.469 23:02:13 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.469 23:02:13 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:20:35.469 23:02:13 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.469 23:02:13 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.469 23:02:13 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.469 23:02:13 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:20:35.469 23:02:13 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:35.469 23:02:13 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:20:35.469 23:02:13 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.469 23:02:13 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:20:35.469 23:02:13 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "254781df-600b-4e75-9c98-9dd9118b78fe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "254781df-600b-4e75-9c98-9dd9118b78fe",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "6feec3f5-8973-493b-8d72-cafdbc6fffe7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6feec3f5-8973-493b-8d72-cafdbc6fffe7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "cbd18fd9-4b8c-493d-a40c-e4737ac93b4f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cbd18fd9-4b8c-493d-a40c-e4737ac93b4f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "78724d81-de2e-4840-8d63-18ee353c032f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78724d81-de2e-4840-8d63-18ee353c032f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "183d7405-49e4-41a1-89e9-21048bffab0c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "183d7405-49e4-41a1-89e9-21048bffab0c",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2a053081-41fa-4b09-982d-114e634ead35"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2a053081-41fa-4b09-982d-114e634ead35",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:20:35.470 23:02:13 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:20:35.470 23:02:13 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:20:35.470 23:02:13 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:20:35.470 23:02:13 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:20:35.470 23:02:13 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60027 00:20:35.470 23:02:13 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60027 ']' 00:20:35.470 23:02:13 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60027 00:20:35.470 23:02:13 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:20:35.470 23:02:13 blockdev_nvme --
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.470 23:02:13 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60027 00:20:35.470 killing process with pid 60027 00:20:35.470 23:02:13 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:35.470 23:02:13 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:35.470 23:02:13 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60027' 00:20:35.470 23:02:13 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60027 00:20:35.470 23:02:13 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60027 00:20:37.384 23:02:15 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:37.384 23:02:15 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:20:37.384 23:02:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:37.384 23:02:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.384 23:02:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:37.384 ************************************ 00:20:37.384 START TEST bdev_hello_world 00:20:37.384 ************************************ 00:20:37.384 23:02:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:20:37.384 [2024-12-09 23:02:15.779672] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:20:37.384 [2024-12-09 23:02:15.779826] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60117 ] 00:20:37.646 [2024-12-09 23:02:15.949434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.906 [2024-12-09 23:02:16.157747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.477 [2024-12-09 23:02:16.802270] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:38.477 [2024-12-09 23:02:16.802596] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:20:38.477 [2024-12-09 23:02:16.802636] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:38.477 [2024-12-09 23:02:16.805714] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:38.477 [2024-12-09 23:02:16.807011] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:38.477 [2024-12-09 23:02:16.807049] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:38.477 [2024-12-09 23:02:16.807283] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
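The hello_world test that just completed boots build/examples/hello_bdev, opens Nvme0n1, writes "Hello World!", reads it back, and stops the app. Run standalone it reduces to the following sketch, assuming the same bdev.json that gen_nvme.sh produced above (the four PCIe controllers at 0000:00:10.0 through 0000:00:13.0):

  # Sketch: standalone hello_bdev run against the first NVMe bdev.
  # --json loads the bdev_nvme_attach_controller config; -b picks the bdev to open.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1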
00:20:38.477 00:20:38.477 [2024-12-09 23:02:16.807305] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:39.417 00:20:39.417 real 0m2.048s 00:20:39.417 user 0m1.616s 00:20:39.417 sys 0m0.314s 00:20:39.417 23:02:17 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.417 23:02:17 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:39.417 ************************************ 00:20:39.417 END TEST bdev_hello_world 00:20:39.417 ************************************ 00:20:39.417 23:02:17 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:20:39.417 23:02:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:39.417 23:02:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.417 23:02:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:39.417 ************************************ 00:20:39.417 START TEST bdev_bounds 00:20:39.417 ************************************ 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:39.417 Process bdevio pid: 60163 00:20:39.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60163 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60163' 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60163 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60163 ']' 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.417 23:02:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:39.678 [2024-12-09 23:02:17.930775] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
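bdevio is launched with -w, so it loads the bdev configuration and then parks until an RPC tells it to run; -s 0 mirrors PRE_RESERVED_MEM=0 from the prologue. Condensed, the sequence the bdev_bounds helper performs is roughly this sketch (flags copied from the trace):

  # Sketch of the bdev_bounds flow: start bdevio in wait mode, then fire the suites.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  # ...wait for the RPC socket as with spdk_tgt above, then:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"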
00:20:39.678 [2024-12-09 23:02:17.931281] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60163 ] 00:20:39.678 [2024-12-09 23:02:18.113640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:39.940 [2024-12-09 23:02:18.290046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.940 [2024-12-09 23:02:18.290469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.940 [2024-12-09 23:02:18.290633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.881 23:02:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.881 23:02:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:40.881 23:02:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:40.881 I/O targets: 00:20:40.881 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:20:40.881 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:20:40.881 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:40.881 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:40.881 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:40.881 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:40.881 00:20:40.881 00:20:40.881 CUnit - A unit testing framework for C - Version 2.1-3 00:20:40.881 http://cunit.sourceforge.net/ 00:20:40.881 00:20:40.881 00:20:40.881 Suite: bdevio tests on: Nvme3n1 00:20:40.881 Test: blockdev write read block ...passed 00:20:40.881 Test: blockdev write zeroes read block ...passed 00:20:40.881 Test: blockdev write zeroes read no split ...passed 00:20:40.881 Test: blockdev write zeroes read split ...passed 00:20:40.881 Test: blockdev write zeroes read split partial ...passed 00:20:40.881 Test: blockdev reset ...[2024-12-09 23:02:19.195125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:20:40.881 passed 00:20:40.881 Test: blockdev write read 8 blocks ...[2024-12-09 23:02:19.200514] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:20:40.881 passed 00:20:40.881 Test: blockdev write read size > 128k ...passed 00:20:40.881 Test: blockdev write read invalid size ...passed 00:20:40.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:40.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:40.881 Test: blockdev write read max offset ...passed 00:20:40.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:40.881 Test: blockdev writev readv 8 blocks ...passed 00:20:40.881 Test: blockdev writev readv 30 x 1block ...passed 00:20:40.881 Test: blockdev writev readv block ...passed 00:20:40.881 Test: blockdev writev readv size > 128k ...passed 00:20:40.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:40.881 Test: blockdev comparev and writev ...[2024-12-09 23:02:19.221720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b160a000 len:0x1000 00:20:40.881 [2024-12-09 23:02:19.221809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:40.881 passed 00:20:40.881 Test: blockdev nvme passthru rw ...passed 00:20:40.881 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:02:19.223256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:40.881 [2024-12-09 23:02:19.223298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:40.881 passed 00:20:40.881 Test: blockdev nvme admin passthru ...passed 00:20:40.881 Test: blockdev copy ...passed 00:20:40.881 Suite: bdevio tests on: Nvme2n3 00:20:40.881 Test: blockdev write read block ...passed 00:20:40.881 Test: blockdev write zeroes read block ...passed 00:20:40.881 Test: blockdev write zeroes read no split ...passed 00:20:40.881 Test: blockdev write zeroes read split ...passed 00:20:40.881 Test: blockdev write zeroes read split partial ...passed 00:20:40.881 Test: blockdev reset ...[2024-12-09 23:02:19.296418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:20:40.881 [2024-12-09 23:02:19.303107] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:20:40.881 passed 00:20:40.881 Test: blockdev write read 8 blocks ...passed 00:20:40.881 Test: blockdev write read size > 128k ...passed 00:20:40.881 Test: blockdev write read invalid size ...passed 00:20:40.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:40.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:40.881 Test: blockdev write read max offset ...passed 00:20:40.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:40.881 Test: blockdev writev readv 8 blocks ...passed 00:20:40.881 Test: blockdev writev readv 30 x 1block ...passed 00:20:40.881 Test: blockdev writev readv block ...passed 00:20:40.881 Test: blockdev writev readv size > 128k ...passed 00:20:40.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:40.881 Test: blockdev comparev and writev ...[2024-12-09 23:02:19.321143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x294006000 len:0x1000 00:20:40.881 [2024-12-09 23:02:19.321329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:40.881 passed 00:20:40.881 Test: blockdev nvme passthru rw ...passed 00:20:40.881 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:02:19.323581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:40.881 [2024-12-09 23:02:19.323841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:40.881 passed 00:20:40.881 Test: blockdev nvme admin passthru ...passed 00:20:40.881 Test: blockdev copy ...passed
00:20:40.881 Suite: bdevio tests on: Nvme2n2 00:20:40.881 Test: blockdev write read block ...passed 00:20:40.881 Test: blockdev write zeroes read block ...passed 00:20:40.881 Test: blockdev write zeroes read no split ...passed 00:20:41.143 Test: blockdev write zeroes read split ...passed 00:20:41.143 Test: blockdev write zeroes read split partial ...passed 00:20:41.143 Test: blockdev reset ...[2024-12-09 23:02:19.411535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:20:41.143 passed 00:20:41.143 Test: blockdev write read 8 blocks ...[2024-12-09 23:02:19.417011] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:20:41.143 passed 00:20:41.143 Test: blockdev write read size > 128k ...passed 00:20:41.143 Test: blockdev write read invalid size ...passed 00:20:41.143 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:41.143 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:41.143 Test: blockdev write read max offset ...passed 00:20:41.143 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:41.143 Test: blockdev writev readv 8 blocks ...passed 00:20:41.143 Test: blockdev writev readv 30 x 1block ...passed 00:20:41.143 Test: blockdev writev readv block ...passed 00:20:41.143 Test: blockdev writev readv size > 128k ...passed 00:20:41.143 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:41.143 Test: blockdev comparev and writev ...[2024-12-09 23:02:19.435068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc23c000 len:0x1000 00:20:41.143 [2024-12-09 23:02:19.435331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:41.143 passed 00:20:41.143 Test: blockdev nvme passthru rw ...passed 00:20:41.143 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:02:19.438904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:41.143 [2024-12-09 23:02:19.438950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:41.143 passed 00:20:41.143 Test: blockdev nvme admin passthru ...passed 00:20:41.143 Test: blockdev copy ...passed
00:20:41.143 Suite: bdevio tests on: Nvme2n1 00:20:41.143 Test: blockdev write read block ...passed 00:20:41.143 Test: blockdev write zeroes read block ...passed 00:20:41.143 Test: blockdev write zeroes read no split ...passed 00:20:41.143 Test: blockdev write zeroes read split ...passed 00:20:41.143 Test: blockdev write zeroes read split partial ...passed 00:20:41.143 Test: blockdev reset ...[2024-12-09 23:02:19.510083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:20:41.143 [2024-12-09 23:02:19.514636] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:20:41.143 passed 00:20:41.143 Test: blockdev write read 8 blocks ...passed 00:20:41.143 Test: blockdev write read size > 128k ...passed 00:20:41.143 Test: blockdev write read invalid size ...passed 00:20:41.143 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:41.143 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:41.143 Test: blockdev write read max offset ...passed 00:20:41.144 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:41.144 Test: blockdev writev readv 8 blocks ...passed 00:20:41.144 Test: blockdev writev readv 30 x 1block ...passed 00:20:41.144 Test: blockdev writev readv block ...passed 00:20:41.144 Test: blockdev writev readv size > 128k ...passed 00:20:41.144 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:41.144 Test: blockdev comparev and writev ...[2024-12-09 23:02:19.530619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc238000 len:0x1000 00:20:41.144 [2024-12-09 23:02:19.530876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:41.144 passed 00:20:41.144 Test: blockdev nvme passthru rw ...passed 00:20:41.144 Test: blockdev nvme passthru vendor specific ...passed 00:20:41.144 Test: blockdev nvme admin passthru ...[2024-12-09 23:02:19.533407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:41.144 [2024-12-09 23:02:19.533489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:41.144 passed 00:20:41.144 Test: blockdev copy ...passed
00:20:41.144 Suite: bdevio tests on: Nvme1n1 00:20:41.144 Test: blockdev write read block ...passed 00:20:41.144 Test: blockdev write zeroes read block ...passed 00:20:41.144 Test: blockdev write zeroes read no split ...passed 00:20:41.144 Test: blockdev write zeroes read split ...passed 00:20:41.404 Test: blockdev write zeroes read split partial ...passed 00:20:41.404 Test: blockdev reset ...[2024-12-09 23:02:19.615271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:20:41.404 [2024-12-09 23:02:19.622599] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:20:41.404 passed 00:20:41.404 Test: blockdev write read 8 blocks ...passed 00:20:41.404 Test: blockdev write read size > 128k ...passed 00:20:41.404 Test: blockdev write read invalid size ...passed 00:20:41.404 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:41.404 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:41.404 Test: blockdev write read max offset ...passed 00:20:41.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:41.405 Test: blockdev writev readv 8 blocks ...passed 00:20:41.405 Test: blockdev writev readv 30 x 1block ...passed 00:20:41.405 Test: blockdev writev readv block ...passed 00:20:41.405 Test: blockdev writev readv size > 128k ...passed 00:20:41.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:41.405 Test: blockdev comparev and writev ...[2024-12-09 23:02:19.641994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc234000 len:0x1000 00:20:41.405 [2024-12-09 23:02:19.642247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:41.405 passed 00:20:41.405 Test: blockdev nvme passthru rw ...passed 00:20:41.405 Test: blockdev nvme passthru vendor specific ...passed 00:20:41.405 Test: blockdev nvme admin passthru ...[2024-12-09 23:02:19.644033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:41.405 [2024-12-09 23:02:19.644114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:41.405 passed 00:20:41.405 Test: blockdev copy ...passed
00:20:41.405 Suite: bdevio tests on: Nvme0n1 00:20:41.405 Test: blockdev write read block ...passed 00:20:41.405 Test: blockdev write zeroes read block ...passed 00:20:41.405 Test: blockdev write zeroes read no split ...passed 00:20:41.405 Test: blockdev write zeroes read split ...passed 00:20:41.405 Test: blockdev write zeroes read split partial ...passed 00:20:41.405 Test: blockdev reset ...[2024-12-09 23:02:19.723462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:20:41.405 [2024-12-09 23:02:19.729577] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:20:41.405 passed 00:20:41.405 Test: blockdev write read 8 blocks ...passed 00:20:41.405 Test: blockdev write read size > 128k ...passed 00:20:41.405 Test: blockdev write read invalid size ...passed 00:20:41.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:41.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:41.405 Test: blockdev write read max offset ...passed 00:20:41.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:41.405 Test: blockdev writev readv 8 blocks ...passed 00:20:41.405 Test: blockdev writev readv 30 x 1block ...passed 00:20:41.405 Test: blockdev writev readv block ...passed 00:20:41.405 Test: blockdev writev readv size > 128k ...passed 00:20:41.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:41.405 Test: blockdev comparev and writev ...passed 00:20:41.405 Test: blockdev nvme passthru rw ...[2024-12-09 23:02:19.744813] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:20:41.405 separate metadata which is not supported yet.
00:20:41.405 passed 00:20:41.405 Test: blockdev nvme passthru vendor specific ...passed 00:20:41.405 Test: blockdev nvme admin passthru ...[2024-12-09 23:02:19.746457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:20:41.405 [2024-12-09 23:02:19.746547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:20:41.405 passed 00:20:41.405 Test: blockdev copy ...passed 00:20:41.405 00:20:41.405 Run Summary: Type Total Ran Passed Failed Inactive 00:20:41.405 suites 6 6 n/a 0 0 00:20:41.405 tests 138 138 138 0 0 00:20:41.405 asserts 893 893 893 0 n/a 00:20:41.405 00:20:41.405 Elapsed time = 1.605 seconds 00:20:41.405 0 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60163 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60163 ']' 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60163 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60163 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60163' 00:20:41.405 killing process with pid 60163 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60163 00:20:41.405 23:02:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60163 00:20:42.345 23:02:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:42.345 ************************************ 00:20:42.345 END TEST bdev_bounds 00:20:42.345 ************************************ 00:20:42.345 00:20:42.345 real 0m2.827s 00:20:42.345 user 0m6.868s 00:20:42.345 sys 0m0.518s 00:20:42.345 23:02:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.345 23:02:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:42.345 23:02:20 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:20:42.345 23:02:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:42.345 23:02:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.345 23:02:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:42.345 ************************************ 00:20:42.345 START TEST bdev_nbd 00:20:42.345 ************************************ 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:42.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60224 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60224 /var/tmp/spdk-nbd.sock 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60224 ']' 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.345 23:02:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:42.606 [2024-12-09 23:02:20.815817] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
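The nbd_function_test starting here exports each bdev through the kernel NBD driver over the dedicated /var/tmp/spdk-nbd.sock RPC socket and sanity-reads one block from each device. Per bdev, the round trip traced below boils down to this sketch (RPC calls and dd flags copied from the trace):

  # Sketch of one NBD round trip; the dd read is the waitfornbd readiness check.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk Nvme0n1 /dev/nbd0    # export the bdev as /dev/nbd0
  grep -q -w nbd0 /proc/partitions         # wait until the kernel registers it
  # one direct-I/O 4 KiB read proves the device is actually usable
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  $rpc nbd_stop_disk /dev/nbd0             # tear the export down again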
00:20:42.606 [2024-12-09 23:02:20.816330] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.606 [2024-12-09 23:02:20.996520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.867 [2024-12-09 23:02:21.137332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:43.439 23:02:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.702 1+0 records in 
00:20:43.702 1+0 records out 00:20:43.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108695 s, 3.8 MB/s 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:43.702 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.964 1+0 records in 00:20:43.964 1+0 records out 00:20:43.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00157455 s, 2.6 MB/s 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:43.964 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:44.235 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:44.236 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.236 1+0 records in 00:20:44.236 1+0 records out 00:20:44.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000850297 s, 4.8 MB/s 00:20:44.236 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.236 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:44.236 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.236 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:44.236 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:44.236 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:44.236 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:44.236 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.502 1+0 records in 00:20:44.502 1+0 records out 00:20:44.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126952 s, 3.2 MB/s 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.502 23:02:22 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:44.502 23:02:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.763 1+0 records in 00:20:44.763 1+0 records out 00:20:44.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113858 s, 3.6 MB/s 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:44.763 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:45.025 1+0 records in 00:20:45.025 1+0 records out 00:20:45.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122612 s, 3.3 MB/s 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:45.025 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:45.285 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd0", 00:20:45.285 "bdev_name": "Nvme0n1" 00:20:45.285 }, 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd1", 00:20:45.285 "bdev_name": "Nvme1n1" 00:20:45.285 }, 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd2", 00:20:45.285 "bdev_name": "Nvme2n1" 00:20:45.285 }, 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd3", 00:20:45.285 "bdev_name": "Nvme2n2" 00:20:45.285 }, 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd4", 00:20:45.285 "bdev_name": "Nvme2n3" 00:20:45.285 }, 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd5", 00:20:45.285 "bdev_name": "Nvme3n1" 00:20:45.285 } 00:20:45.285 ]' 00:20:45.285 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:45.285 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd0", 00:20:45.285 "bdev_name": "Nvme0n1" 00:20:45.285 }, 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd1", 00:20:45.285 "bdev_name": "Nvme1n1" 00:20:45.285 }, 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd2", 00:20:45.285 "bdev_name": "Nvme2n1" 00:20:45.285 }, 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd3", 00:20:45.285 "bdev_name": "Nvme2n2" 00:20:45.285 }, 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd4", 00:20:45.285 "bdev_name": "Nvme2n3" 00:20:45.285 }, 00:20:45.285 { 00:20:45.285 "nbd_device": "/dev/nbd5", 00:20:45.286 "bdev_name": "Nvme3n1" 00:20:45.286 } 00:20:45.286 ]' 00:20:45.286 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:45.286 23:02:23 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:20:45.286 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:45.286 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:20:45.286 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:45.286 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:45.286 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.286 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:45.546 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:45.546 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:45.546 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:45.546 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.546 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.546 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:45.546 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:45.546 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.546 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.546 23:02:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:45.808 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:45.808 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:45.808 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:45.808 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.808 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.808 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:45.808 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:45.808 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.808 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.808 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:20:46.068 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:20:46.068 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:20:46.068 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:20:46.068 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.068 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.068 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:20:46.068 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:46.068 23:02:24 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:20:46.068 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.068 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:20:46.328 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:20:46.328 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:20:46.328 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:20:46.328 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.328 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.328 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:20:46.328 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:46.328 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.328 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.328 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:20:46.589 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:20:46.589 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:20:46.589 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:20:46.589 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.589 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.589 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:20:46.589 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:46.589 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.589 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.589 23:02:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:46.849 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:47.110 23:02:25 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:47.110 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:20:47.372 /dev/nbd0 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:47.372 
23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.372 1+0 records in 00:20:47.372 1+0 records out 00:20:47.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00149725 s, 2.7 MB/s 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:47.372 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:20:47.634 /dev/nbd1 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.634 1+0 records in 00:20:47.634 1+0 records out 00:20:47.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010099 s, 4.1 MB/s 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:47.634 23:02:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:20:47.896 /dev/nbd10 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.896 1+0 records in 00:20:47.896 1+0 records out 00:20:47.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00145922 s, 2.8 MB/s 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:47.896 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:20:48.156 /dev/nbd11 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:48.156 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.156 1+0 records in 00:20:48.156 1+0 records out 00:20:48.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000916598 s, 4.5 MB/s 00:20:48.157 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.157 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:48.157 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.157 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:48.157 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:48.157 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.157 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:48.157 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:20:48.418 /dev/nbd12 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.418 1+0 records in 00:20:48.418 1+0 records out 00:20:48.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117036 s, 3.5 MB/s 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:48.418 23:02:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:20:48.678 /dev/nbd13 00:20:48.678 23:02:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.678 1+0 records in 00:20:48.678 1+0 records out 00:20:48.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101916 s, 4.0 MB/s 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:48.678 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:48.938 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:48.938 { 00:20:48.938 "nbd_device": "/dev/nbd0", 00:20:48.938 "bdev_name": "Nvme0n1" 00:20:48.938 }, 00:20:48.938 { 00:20:48.938 "nbd_device": "/dev/nbd1", 00:20:48.938 "bdev_name": "Nvme1n1" 00:20:48.938 }, 00:20:48.938 { 00:20:48.938 "nbd_device": "/dev/nbd10", 00:20:48.939 "bdev_name": "Nvme2n1" 00:20:48.939 }, 00:20:48.939 { 00:20:48.939 "nbd_device": "/dev/nbd11", 00:20:48.939 "bdev_name": "Nvme2n2" 00:20:48.939 }, 00:20:48.939 { 00:20:48.939 "nbd_device": "/dev/nbd12", 00:20:48.939 "bdev_name": "Nvme2n3" 00:20:48.939 }, 00:20:48.939 { 00:20:48.939 "nbd_device": "/dev/nbd13", 00:20:48.939 "bdev_name": "Nvme3n1" 00:20:48.939 } 00:20:48.939 ]' 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:48.939 { 00:20:48.939 "nbd_device": "/dev/nbd0", 00:20:48.939 "bdev_name": "Nvme0n1" 00:20:48.939 }, 00:20:48.939 { 00:20:48.939 "nbd_device": "/dev/nbd1", 00:20:48.939 "bdev_name": "Nvme1n1" 00:20:48.939 }, 00:20:48.939 { 00:20:48.939 "nbd_device": "/dev/nbd10", 00:20:48.939 "bdev_name": "Nvme2n1" 00:20:48.939 }, 00:20:48.939 
{ 00:20:48.939 "nbd_device": "/dev/nbd11", 00:20:48.939 "bdev_name": "Nvme2n2" 00:20:48.939 }, 00:20:48.939 { 00:20:48.939 "nbd_device": "/dev/nbd12", 00:20:48.939 "bdev_name": "Nvme2n3" 00:20:48.939 }, 00:20:48.939 { 00:20:48.939 "nbd_device": "/dev/nbd13", 00:20:48.939 "bdev_name": "Nvme3n1" 00:20:48.939 } 00:20:48.939 ]' 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:48.939 /dev/nbd1 00:20:48.939 /dev/nbd10 00:20:48.939 /dev/nbd11 00:20:48.939 /dev/nbd12 00:20:48.939 /dev/nbd13' 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:48.939 /dev/nbd1 00:20:48.939 /dev/nbd10 00:20:48.939 /dev/nbd11 00:20:48.939 /dev/nbd12 00:20:48.939 /dev/nbd13' 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:48.939 256+0 records in 00:20:48.939 256+0 records out 00:20:48.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00857177 s, 122 MB/s 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:48.939 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:49.200 256+0 records in 00:20:49.200 256+0 records out 00:20:49.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244905 s, 4.3 MB/s 00:20:49.200 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:49.200 23:02:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:49.784 256+0 records in 00:20:49.784 256+0 records out 00:20:49.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.436076 s, 2.4 MB/s 00:20:49.784 23:02:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:49.784 23:02:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:50.739 256+0 records in 00:20:50.739 256+0 records out 00:20:50.739 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.806285 s, 1.3 MB/s 00:20:50.739 23:02:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:50.739 23:02:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:50.739 256+0 records in 00:20:50.739 256+0 records out 00:20:50.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.278241 s, 3.8 MB/s 00:20:50.739 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:50.739 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:51.016 256+0 records in 00:20:51.016 256+0 records out 00:20:51.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257821 s, 4.1 MB/s 00:20:51.016 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:51.016 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:51.276 256+0 records in 00:20:51.276 256+0 records out 00:20:51.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.291523 s, 3.6 MB/s 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:51.276 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.538 23:02:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:51.800 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.059 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:52.320 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:52.320 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:52.320 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:52.320 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.320 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.320 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:52.320 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:52.320 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.320 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.320 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:52.585 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:20:52.585 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:52.585 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:52.586 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.586 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.586 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:52.586 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:52.586 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.586 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.586 23:02:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.846 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:53.106 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:53.374 malloc_lvol_verify 00:20:53.374 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:53.635 cea2256e-9e52-4d24-b886-e6cceaa26586 00:20:53.635 23:02:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:53.896 e49264a5-deaa-4e01-b2de-dd8b55a7f593 00:20:53.896 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:54.157 /dev/nbd0 00:20:54.157 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:54.157 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:54.157 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:54.157 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:54.157 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:54.157 mke2fs 1.47.0 (5-Feb-2023) 00:20:54.157 Discarding device blocks: 0/4096 done 00:20:54.157 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:54.157 00:20:54.157 Allocating group tables: 0/1 done 00:20:54.157 Writing inode tables: 0/1 done 00:20:54.157 Creating journal (1024 blocks): done 00:20:54.157 Writing superblocks and filesystem accounting information: 0/1 done 00:20:54.157 00:20:54.158 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:54.158 23:02:32 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:54.158 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:54.158 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.158 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:54.158 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.158 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60224 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60224 ']' 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60224 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60224 00:20:54.418 killing process with pid 60224 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60224' 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60224 00:20:54.418 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60224 00:20:55.375 23:02:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:55.375 00:20:55.375 real 0m12.926s 00:20:55.375 user 0m17.090s 00:20:55.375 sys 0m4.243s 00:20:55.375 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.375 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:55.375 ************************************ 00:20:55.375 END TEST bdev_nbd 00:20:55.375 ************************************ 00:20:55.375 skipping fio tests on NVMe due to multi-ns failures. 00:20:55.375 23:02:33 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:55.375 23:02:33 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:20:55.375 23:02:33 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
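[Editor's note] The bdev_nbd suite that finishes just above gates every read and write on polling helpers traced in this log: waitfornbd (autotest_common.sh @872-@893) before I/O, and waitfornbd_exit (nbd_common.sh @35-@45) after each nbd_stop_disk. A minimal sketch of the first helper, paraphrased from the trace; the per-iteration delay is an assumption, since this excerpt never shows the loop sleeping:

    # Poll until the kernel lists the device in /proc/partitions, then prove
    # it is usable with one O_DIRECT 4 KiB read, mirroring the dd/stat/rm
    # sequence traced above.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed delay; not visible in this log
        done
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
                   bs=4096 count=1 iflag=direct 2>/dev/null; then
                local size
                size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
                rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
                [[ $size != 0 ]] && return 0
            fi
        done
        return 1
    }

In the passing runs above both loops exit on their first pass, which is why each traced break and return 0 fires with i still at 1.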
00:20:55.375 23:02:33 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:55.375 23:02:33 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:55.375 23:02:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:55.375 23:02:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.375 23:02:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:55.375 ************************************ 00:20:55.375 START TEST bdev_verify 00:20:55.375 ************************************ 00:20:55.375 23:02:33 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:55.375 [2024-12-09 23:02:33.804920] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:20:55.375 [2024-12-09 23:02:33.805088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60634 ] 00:20:55.635 [2024-12-09 23:02:33.971252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:55.897 [2024-12-09 23:02:34.131434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.897 [2024-12-09 23:02:34.131477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.483 Running I/O for 5 seconds... 00:20:58.816 16192.00 IOPS, 63.25 MiB/s [2024-12-09T23:02:38.220Z] 16448.00 IOPS, 64.25 MiB/s [2024-12-09T23:02:39.162Z] 16192.00 IOPS, 63.25 MiB/s [2024-12-09T23:02:40.107Z] 16320.00 IOPS, 63.75 MiB/s 00:21:01.646 Latency(us) 00:21:01.646 [2024-12-09T23:02:40.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.646 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0x0 length 0xbd0bd 00:21:01.646 Nvme0n1 : 5.07 1275.21 4.98 0.00 0.00 99926.20 12804.73 464599.83 00:21:01.646 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:21:01.646 Nvme0n1 : 5.08 1228.16 4.80 0.00 0.00 103829.51 7662.67 451694.28 00:21:01.646 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0x0 length 0xa0000 00:21:01.646 Nvme1n1 : 5.07 1274.04 4.98 0.00 0.00 99842.88 15426.17 467826.22 00:21:01.646 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0xa0000 length 0xa0000 00:21:01.646 Nvme1n1 : 5.09 1227.02 4.79 0.00 0.00 103552.36 9074.22 451694.28 00:21:01.646 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0x0 length 0x80000 00:21:01.646 Nvme2n1 : 5.10 1281.02 5.00 0.00 0.00 99187.76 15930.29 471052.60 00:21:01.646 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0x80000 length 0x80000 00:21:01.646 Nvme2n1 : 5.09 1226.98 4.79 0.00 0.00 103334.61 9326.28 451694.28 00:21:01.646 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO 
size: 4096) 00:21:01.646 Verification LBA range: start 0x0 length 0x80000 00:21:01.646 Nvme2n2 : 5.10 1280.00 5.00 0.00 0.00 99025.68 19156.68 480731.77 00:21:01.646 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0x80000 length 0x80000 00:21:01.646 Nvme2n2 : 5.09 1231.58 4.81 0.00 0.00 102933.69 7612.26 474278.99 00:21:01.646 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0x0 length 0x80000 00:21:01.646 Nvme2n3 : 5.10 1279.64 5.00 0.00 0.00 98874.42 19358.33 487184.54 00:21:01.646 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0x80000 length 0x80000 00:21:01.646 Nvme2n3 : 5.09 1231.22 4.81 0.00 0.00 102698.49 7662.67 471052.60 00:21:01.646 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0x0 length 0x20000 00:21:01.646 Nvme3n1 : 5.09 1270.25 4.96 0.00 0.00 99475.82 17946.78 496863.70 00:21:01.646 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:01.646 Verification LBA range: start 0x20000 length 0x20000 00:21:01.646 Nvme3n1 : 5.10 1230.81 4.81 0.00 0.00 102468.56 8015.56 467826.22 00:21:01.646 [2024-12-09T23:02:40.108Z] =================================================================================================================== 00:21:01.646 [2024-12-09T23:02:40.108Z] Total : 15035.94 58.73 0.00 0.00 101225.83 7612.26 496863.70 00:21:03.029 00:21:03.029 real 0m7.751s 00:21:03.029 user 0m14.292s 00:21:03.029 sys 0m0.326s 00:21:03.029 ************************************ 00:21:03.029 END TEST bdev_verify 00:21:03.029 ************************************ 00:21:03.029 23:02:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:03.029 23:02:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:03.289 23:02:41 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:03.289 23:02:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:03.289 23:02:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:03.289 23:02:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:03.289 ************************************ 00:21:03.289 START TEST bdev_verify_big_io 00:21:03.289 ************************************ 00:21:03.289 23:02:41 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:03.289 [2024-12-09 23:02:41.630787] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
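[Editor's note] The bdev_verify_big_io pass starting here drives the same six NVMe bdevs through the same bdevperf harness as the bdev_verify pass that just finished; per the two run_test lines traced above, the only flag that changes is the I/O size (-o). Side by side, exactly as invoked:

    # bdev_verify (completed above): 4 KiB verify I/O, queue depth 128, 5 s, cores 0x3
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
    # bdev_verify_big_io (starting here): identical except 64 KiB I/O
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''

The effect is visible in the counters: roughly 16K IOPS at 4 KiB above versus a few thousand IOPS (at 16x the bytes per I/O) in the table that follows.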
00:21:03.289 [2024-12-09 23:02:41.630956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60732 ] 00:21:03.548 [2024-12-09 23:02:41.796066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:03.548 [2024-12-09 23:02:41.942692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.548 [2024-12-09 23:02:41.942837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.492 Running I/O for 5 seconds... 00:21:09.607 1704.00 IOPS, 106.50 MiB/s [2024-12-09T23:02:48.717Z] 2392.50 IOPS, 149.53 MiB/s [2024-12-09T23:02:48.717Z] 3231.33 IOPS, 201.96 MiB/s 00:21:10.255 Latency(us) 00:21:10.255 [2024-12-09T23:02:48.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.255 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0x0 length 0xbd0b 00:21:10.255 Nvme0n1 : 5.54 138.51 8.66 0.00 0.00 889799.81 20568.22 974369.08 00:21:10.255 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0xbd0b length 0xbd0b 00:21:10.255 Nvme0n1 : 5.66 135.75 8.48 0.00 0.00 914879.02 34683.67 955010.76 00:21:10.255 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0x0 length 0xa000 00:21:10.255 Nvme1n1 : 5.65 140.31 8.77 0.00 0.00 851692.35 100018.02 809823.31 00:21:10.255 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0xa000 length 0xa000 00:21:10.255 Nvme1n1 : 5.66 135.68 8.48 0.00 0.00 889189.61 100824.62 806596.92 00:21:10.255 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0x0 length 0x8000 00:21:10.255 Nvme2n1 : 5.70 145.31 9.08 0.00 0.00 805223.25 42346.34 709805.29 00:21:10.255 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0x8000 length 0x8000 00:21:10.255 Nvme2n1 : 5.66 135.61 8.48 0.00 0.00 863468.31 117763.15 819502.47 00:21:10.255 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0x0 length 0x8000 00:21:10.255 Nvme2n2 : 5.83 140.92 8.81 0.00 0.00 802923.52 84692.68 1574477.19 00:21:10.255 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0x8000 length 0x8000 00:21:10.255 Nvme2n2 : 5.78 143.21 8.95 0.00 0.00 799002.20 40329.85 1019538.51 00:21:10.255 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0x0 length 0x8000 00:21:10.255 Nvme2n3 : 5.85 149.81 9.36 0.00 0.00 739041.67 22383.06 1593835.52 00:21:10.255 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0x8000 length 0x8000 00:21:10.255 Nvme2n3 : 5.84 149.02 9.31 0.00 0.00 746394.44 37103.46 909841.33 00:21:10.255 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0x0 length 0x2000 00:21:10.255 Nvme3n1 : 5.88 164.56 10.29 0.00 0.00 653595.75 4335.46 1619646.62 00:21:10.255 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:21:10.255 Verification LBA range: start 0x2000 length 0x2000 00:21:10.255 Nvme3n1 : 5.86 163.82 10.24 0.00 0.00 662830.19 2697.06 942105.21 00:21:10.255 [2024-12-09T23:02:48.717Z] =================================================================================================================== 00:21:10.255 [2024-12-09T23:02:48.717Z] Total : 1742.50 108.91 0.00 0.00 794786.62 2697.06 1619646.62 00:21:12.865 00:21:12.865 real 0m9.746s 00:21:12.865 user 0m18.268s 00:21:12.865 sys 0m0.372s 00:21:12.865 ************************************ 00:21:12.865 END TEST bdev_verify_big_io 00:21:12.865 ************************************ 00:21:12.865 23:02:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.865 23:02:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.131 23:02:51 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:13.131 23:02:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:13.131 23:02:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.131 23:02:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:13.131 ************************************ 00:21:13.131 START TEST bdev_write_zeroes 00:21:13.131 ************************************ 00:21:13.131 23:02:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:13.131 [2024-12-09 23:02:51.452720] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:13.131 [2024-12-09 23:02:51.452863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60842 ] 00:21:13.392 [2024-12-09 23:02:51.616475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.392 [2024-12-09 23:02:51.757259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.963 Running I/O for 1 seconds... 
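[Editor's note] While the one-second pass above runs, note what changed relative to the two verify passes. Per the run_test line traced above, -w write_zeroes replaces read/verify traffic with zero-fill writes (for NVMe bdevs these typically map to the Write Zeroes command when the controller supports it, an assumption this log does not confirm), -t drops to 1, and no -m mask is given, so a single core (-c 0x1 in the EAL line above) drives all six bdevs:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''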
00:21:15.345 39331.00 IOPS, 153.64 MiB/s 00:21:15.345 Latency(us) 00:21:15.345 [2024-12-09T23:02:53.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.345 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:15.345 Nvme0n1 : 1.02 6408.51 25.03 0.00 0.00 19875.25 5142.06 43959.53 00:21:15.345 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:15.345 Nvme1n1 : 1.03 6604.27 25.80 0.00 0.00 19252.59 8872.57 29440.79 00:21:15.345 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:15.345 Nvme2n1 : 1.03 6624.89 25.88 0.00 0.00 19130.08 8872.57 27827.59 00:21:15.345 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:15.345 Nvme2n2 : 1.03 6616.91 25.85 0.00 0.00 19095.77 8015.56 28230.89 00:21:15.345 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:15.345 Nvme2n3 : 1.04 6606.64 25.81 0.00 0.00 19086.63 7309.78 29844.09 00:21:15.345 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:15.345 Nvme3n1 : 1.03 6571.47 25.67 0.00 0.00 19121.33 7511.43 27827.59 00:21:15.345 [2024-12-09T23:02:53.807Z] =================================================================================================================== 00:21:15.345 [2024-12-09T23:02:53.807Z] Total : 39432.69 154.03 0.00 0.00 19256.24 5142.06 43959.53 00:21:15.917 00:21:15.917 real 0m2.932s 00:21:15.917 user 0m2.544s 00:21:15.917 sys 0m0.260s 00:21:15.917 ************************************ 00:21:15.917 END TEST bdev_write_zeroes 00:21:15.917 23:02:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.917 23:02:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:15.917 ************************************ 00:21:15.917 23:02:54 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:15.917 23:02:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:15.917 23:02:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.917 23:02:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:16.187 ************************************ 00:21:16.187 START TEST bdev_json_nonenclosed 00:21:16.187 ************************************ 00:21:16.187 23:02:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:16.187 [2024-12-09 23:02:54.468989] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
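[Editor's note] The two negative-path suites that close out blockdev_nvme, bdev_json_nonenclosed starting here and bdev_json_nonarray after it, hand bdevperf deliberately malformed configs and pass only if json_config_prepare_ctx rejects them; the *ERROR* lines below are the expected outcome, and both runs exit through "spdk_app_stop'd on non-zero". The repository files' exact contents are not shown in this log; hypothetical minimal inputs consistent with the two errors:

    # nonenclosed.json (hypothetical): members without an enclosing object
    # -> "Invalid JSON configuration: not enclosed in {}."
    "subsystems": []

    # nonarray.json (hypothetical): "subsystems" present but not an array
    # -> "Invalid JSON configuration: 'subsystems' should be an array."
    { "subsystems": { "bdev": {} } }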
00:21:16.187 [2024-12-09 23:02:54.469153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60895 ] 00:21:16.187 [2024-12-09 23:02:54.629389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.450 [2024-12-09 23:02:54.775625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.450 [2024-12-09 23:02:54.775753] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:16.450 [2024-12-09 23:02:54.775773] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:16.450 [2024-12-09 23:02:54.775784] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:16.711 00:21:16.711 real 0m0.595s 00:21:16.711 user 0m0.367s 00:21:16.711 sys 0m0.120s 00:21:16.711 ************************************ 00:21:16.711 END TEST bdev_json_nonenclosed 00:21:16.711 ************************************ 00:21:16.711 23:02:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.711 23:02:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:16.711 23:02:55 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:16.711 23:02:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:16.711 23:02:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.711 23:02:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:16.711 ************************************ 00:21:16.711 START TEST bdev_json_nonarray 00:21:16.711 ************************************ 00:21:16.711 23:02:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:16.711 [2024-12-09 23:02:55.137303] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:16.711 [2024-12-09 23:02:55.137932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60926 ] 00:21:16.970 [2024-12-09 23:02:55.303419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.230 [2024-12-09 23:02:55.448856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.230 [2024-12-09 23:02:55.448996] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:17.230 [2024-12-09 23:02:55.449016] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:17.230 [2024-12-09 23:02:55.449027] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:17.230 00:21:17.230 real 0m0.597s 00:21:17.230 user 0m0.367s 00:21:17.230 sys 0m0.122s 00:21:17.230 23:02:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.230 23:02:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:17.230 ************************************ 00:21:17.230 END TEST bdev_json_nonarray 00:21:17.230 ************************************ 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:21:17.490 23:02:55 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:21:17.490 00:21:17.490 real 0m43.782s 00:21:17.490 user 1m5.087s 00:21:17.490 sys 0m7.301s 00:21:17.490 ************************************ 00:21:17.490 END TEST blockdev_nvme 00:21:17.490 ************************************ 00:21:17.490 23:02:55 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.490 23:02:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:17.490 23:02:55 -- spdk/autotest.sh@209 -- # uname -s 00:21:17.490 23:02:55 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:21:17.490 23:02:55 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:21:17.490 23:02:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:17.490 23:02:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.490 23:02:55 -- common/autotest_common.sh@10 -- # set +x 00:21:17.490 ************************************ 00:21:17.490 START TEST blockdev_nvme_gpt 00:21:17.490 ************************************ 00:21:17.490 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:21:17.490 * Looking for test storage... 
00:21:17.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:17.490 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:17.490 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:21:17.490 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:17.490 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.490 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:21:17.751 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.751 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:21:17.751 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:21:17.751 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.751 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:21:17.751 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.751 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.751 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.751 23:02:55 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.751 --rc genhtml_branch_coverage=1 00:21:17.751 --rc genhtml_function_coverage=1 00:21:17.751 --rc genhtml_legend=1 00:21:17.751 --rc geninfo_all_blocks=1 00:21:17.751 --rc geninfo_unexecuted_blocks=1 00:21:17.751 00:21:17.751 ' 00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.751 --rc 
genhtml_branch_coverage=1 00:21:17.751 --rc genhtml_function_coverage=1 00:21:17.751 --rc genhtml_legend=1 00:21:17.751 --rc geninfo_all_blocks=1 00:21:17.751 --rc geninfo_unexecuted_blocks=1 00:21:17.751 00:21:17.751 ' 00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.751 --rc genhtml_branch_coverage=1 00:21:17.751 --rc genhtml_function_coverage=1 00:21:17.751 --rc genhtml_legend=1 00:21:17.751 --rc geninfo_all_blocks=1 00:21:17.751 --rc geninfo_unexecuted_blocks=1 00:21:17.751 00:21:17.751 ' 00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.751 --rc genhtml_branch_coverage=1 00:21:17.751 --rc genhtml_function_coverage=1 00:21:17.751 --rc genhtml_legend=1 00:21:17.751 --rc geninfo_all_blocks=1 00:21:17.751 --rc geninfo_unexecuted_blocks=1 00:21:17.751 00:21:17.751 ' 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61010 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61010 
00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61010 ']' 00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.751 23:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:17.751 23:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:17.752 [2024-12-09 23:02:56.062071] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:17.752 [2024-12-09 23:02:56.062252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61010 ] 00:21:18.012 [2024-12-09 23:02:56.229638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.012 [2024-12-09 23:02:56.376199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.955 23:02:57 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.955 23:02:57 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:21:18.955 23:02:57 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:21:18.955 23:02:57 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:21:18.955 23:02:57 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:19.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:19.215 Waiting for block devices as requested 00:21:19.483 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:19.483 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:19.483 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:21:19.745 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:21:25.037 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:21:25.037 23:03:03 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:25.037 23:03:03 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:21:25.037 BYT; 00:21:25.037 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:21:25.037 BYT; 00:21:25.037 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:21:25.037 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:21:25.037 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:21:25.038 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:21:25.038 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:21:25.038 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:21:25.038 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:21:25.038 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:21:25.038 23:03:03 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:21:25.038 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:21:25.038 23:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:21:25.981 The operation has completed successfully. 00:21:25.981 23:03:04 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:21:26.923 The operation has completed successfully. 00:21:26.923 23:03:05 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:27.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:28.065 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:28.065 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:28.065 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:28.065 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:28.355 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:21:28.355 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.355 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:28.355 [] 00:21:28.355 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.355 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:21:28.355 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:21:28.355 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:21:28.355 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:28.355 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:21:28.355 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.355 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.618 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:21:28.618 23:03:06 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.618 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:21:28.618 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.618 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.618 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:28.618 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.618 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:21:28.618 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:21:28.618 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:21:28.618 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.618 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:28.618 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.618 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:21:28.618 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:21:28.619 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "effb6a52-f1f5-4282-833e-687de244ebd1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "effb6a52-f1f5-4282-833e-687de244ebd1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1a6a1bf9-6639-40a5-a1c6-b8e91e1646c0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1a6a1bf9-6639-40a5-a1c6-b8e91e1646c0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "82957d49-9349-49a0-96d5-99b3a4be3012"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "82957d49-9349-49a0-96d5-99b3a4be3012",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7b7a1562-27fa-40d2-aca5-ef4a21e6d8fa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7b7a1562-27fa-40d2-aca5-ef4a21e6d8fa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ab0db3c8-0527-49aa-bcd7-d4ca2130fcb4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ab0db3c8-0527-49aa-bcd7-d4ca2130fcb4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:21:28.880 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:21:28.880 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:21:28.880 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:21:28.880 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61010 00:21:28.880 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61010 ']' 00:21:28.880 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61010 00:21:28.880 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:21:28.880 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.880 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61010 00:21:28.880 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:28.880 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:28.880 killing process with pid 61010 00:21:28.880 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61010' 00:21:28.880 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61010 00:21:28.880 23:03:07 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61010 00:21:30.806 23:03:08 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:30.806 23:03:08 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:21:30.806 23:03:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:30.806 23:03:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.806 23:03:08 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:30.806 ************************************ 00:21:30.806 START TEST bdev_hello_world 00:21:30.806 ************************************ 00:21:30.806 23:03:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:21:30.806 [2024-12-09 23:03:08.939043] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:30.806 [2024-12-09 23:03:08.939211] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61642 ] 00:21:30.806 [2024-12-09 23:03:09.099904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.806 [2024-12-09 23:03:09.245803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.434 [2024-12-09 23:03:09.863109] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:31.434 [2024-12-09 23:03:09.863175] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:21:31.434 [2024-12-09 23:03:09.863209] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:31.434 [2024-12-09 23:03:09.866198] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:31.434 [2024-12-09 23:03:09.867587] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:31.434 [2024-12-09 23:03:09.867627] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:31.434 [2024-12-09 23:03:09.867794] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:21:31.434 00:21:31.434 [2024-12-09 23:03:09.867819] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:32.380 00:21:32.380 real 0m1.868s 00:21:32.380 user 0m1.484s 00:21:32.380 sys 0m0.269s 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.380 ************************************ 00:21:32.380 END TEST bdev_hello_world 00:21:32.380 ************************************ 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:32.380 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:21:32.380 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:32.380 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.380 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:32.380 ************************************ 00:21:32.380 START TEST bdev_bounds 00:21:32.380 ************************************ 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61678 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:32.380 Process bdevio pid: 61678 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61678' 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61678 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61678 ']' 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.380 23:03:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:32.643 [2024-12-09 23:03:10.873969] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:21:32.643 [2024-12-09 23:03:10.874116] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61678 ] 00:21:32.643 [2024-12-09 23:03:11.043800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:32.904 [2024-12-09 23:03:11.225388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.904 [2024-12-09 23:03:11.225606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.904 [2024-12-09 23:03:11.225606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.847 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.847 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:21:33.847 23:03:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:33.847 I/O targets: 00:21:33.847 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:21:33.847 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:21:33.847 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:21:33.847 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:33.847 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:33.847 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:33.847 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:21:33.847 00:21:33.847 00:21:33.847 CUnit - A unit testing framework for C - Version 2.1-3 00:21:33.847 http://cunit.sourceforge.net/ 00:21:33.847 00:21:33.847 00:21:33.847 Suite: bdevio tests on: Nvme3n1 00:21:33.847 Test: blockdev write read block ...passed 00:21:33.847 Test: blockdev write zeroes read block ...passed 00:21:33.847 Test: blockdev write zeroes read no split ...passed 00:21:33.847 Test: blockdev write zeroes read split ...passed 00:21:33.847 Test: blockdev write zeroes read split partial ...passed 00:21:33.847 Test: blockdev reset ...[2024-12-09 23:03:12.180538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:21:33.847 [2024-12-09 23:03:12.186476] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:21:33.847 passed 00:21:33.847 Test: blockdev write read 8 blocks ...passed 00:21:33.847 Test: blockdev write read size > 128k ...passed 00:21:33.847 Test: blockdev write read invalid size ...passed 00:21:33.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:33.847 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:33.847 Test: blockdev write read max offset ...passed 00:21:33.847 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:33.847 Test: blockdev writev readv 8 blocks ...passed 00:21:33.847 Test: blockdev writev readv 30 x 1block ...passed 00:21:33.847 Test: blockdev writev readv block ...passed 00:21:33.847 Test: blockdev writev readv size > 128k ...passed 00:21:33.847 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:33.847 Test: blockdev comparev and writev ...[2024-12-09 23:03:12.210412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4c04000 len:0x1000 00:21:33.847 [2024-12-09 23:03:12.210491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:33.847 passed 00:21:33.847 Test: blockdev nvme passthru rw ...passed 00:21:33.847 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:03:12.212839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:33.847 [2024-12-09 23:03:12.212894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:33.847 passed 00:21:33.847 Test: blockdev nvme admin passthru ...passed 00:21:33.847 Test: blockdev copy ...passed 00:21:33.847 Suite: bdevio tests on: Nvme2n3 00:21:33.847 Test: blockdev write read block ...passed 00:21:33.847 Test: blockdev write zeroes read block ...passed 00:21:33.847 Test: blockdev write zeroes read no split ...passed 00:21:33.847 Test: blockdev write zeroes read split ...passed 00:21:33.847 Test: blockdev write zeroes read split partial ...passed 00:21:33.847 Test: blockdev reset ...[2024-12-09 23:03:12.285075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:21:33.847 [2024-12-09 23:03:12.291033] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:21:33.847 passed 00:21:33.847 Test: blockdev write read 8 blocks ...passed 00:21:33.847 Test: blockdev write read size > 128k ...passed 00:21:33.847 Test: blockdev write read invalid size ...passed 00:21:33.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:33.847 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:33.847 Test: blockdev write read max offset ...passed 00:21:33.847 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:33.847 Test: blockdev writev readv 8 blocks ...passed 00:21:33.847 Test: blockdev writev readv 30 x 1block ...passed 00:21:33.847 Test: blockdev writev readv block ...passed 00:21:34.109 Test: blockdev writev readv size > 128k ...passed 00:21:34.109 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:34.109 Test: blockdev comparev and writev ...[2024-12-09 23:03:12.315548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4c02000 len:0x1000 00:21:34.109 [2024-12-09 23:03:12.315626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:34.109 passed 00:21:34.109 Test: blockdev nvme passthru rw ...passed 00:21:34.109 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:03:12.318655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:34.109 [2024-12-09 23:03:12.318709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:34.109 passed 00:21:34.109 Test: blockdev nvme admin passthru ...passed 00:21:34.109 Test: blockdev copy ...passed 00:21:34.109 Suite: bdevio tests on: Nvme2n2 00:21:34.109 Test: blockdev write read block ...passed 00:21:34.109 Test: blockdev write zeroes read block ...passed 00:21:34.109 Test: blockdev write zeroes read no split ...passed 00:21:34.109 Test: blockdev write zeroes read split ...passed 00:21:34.109 Test: blockdev write zeroes read split partial ...passed 00:21:34.109 Test: blockdev reset ...[2024-12-09 23:03:12.389053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:21:34.109 [2024-12-09 23:03:12.398110] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:21:34.109 passed 00:21:34.109 Test: blockdev write read 8 blocks ...passed 00:21:34.109 Test: blockdev write read size > 128k ...passed 00:21:34.109 Test: blockdev write read invalid size ...passed 00:21:34.109 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:34.109 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:34.109 Test: blockdev write read max offset ...passed 00:21:34.109 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:34.109 Test: blockdev writev readv 8 blocks ...passed 00:21:34.109 Test: blockdev writev readv 30 x 1block ...passed 00:21:34.109 Test: blockdev writev readv block ...passed 00:21:34.109 Test: blockdev writev readv size > 128k ...passed 00:21:34.109 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:34.109 Test: blockdev comparev and writev ...[2024-12-09 23:03:12.420405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5a38000 len:0x1000 00:21:34.109 [2024-12-09 23:03:12.420468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:34.109 passed 00:21:34.109 Test: blockdev nvme passthru rw ...passed 00:21:34.109 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:03:12.423495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:34.109 [2024-12-09 23:03:12.423544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:34.109 passed 00:21:34.109 Test: blockdev nvme admin passthru ...passed 00:21:34.109 Test: blockdev copy ...passed 00:21:34.109 Suite: bdevio tests on: Nvme2n1 00:21:34.109 Test: blockdev write read block ...passed 00:21:34.109 Test: blockdev write zeroes read block ...passed 00:21:34.109 Test: blockdev write zeroes read no split ...passed 00:21:34.109 Test: blockdev write zeroes read split ...passed 00:21:34.109 Test: blockdev write zeroes read split partial ...passed 00:21:34.109 Test: blockdev reset ...[2024-12-09 23:03:12.510213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:21:34.109 [2024-12-09 23:03:12.516660] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:21:34.109 passed 00:21:34.109 Test: blockdev write read 8 blocks ...passed 00:21:34.109 Test: blockdev write read size > 128k ...passed 00:21:34.109 Test: blockdev write read invalid size ...passed 00:21:34.109 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:34.109 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:34.109 Test: blockdev write read max offset ...passed 00:21:34.109 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:34.109 Test: blockdev writev readv 8 blocks ...passed 00:21:34.109 Test: blockdev writev readv 30 x 1block ...passed 00:21:34.109 Test: blockdev writev readv block ...passed 00:21:34.109 Test: blockdev writev readv size > 128k ...passed 00:21:34.109 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:34.109 Test: blockdev comparev and writev ...[2024-12-09 23:03:12.538369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5a34000 len:0x1000 00:21:34.109 [2024-12-09 23:03:12.538429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:34.109 passed 00:21:34.109 Test: blockdev nvme passthru rw ...passed 00:21:34.109 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:03:12.540274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:34.109 [2024-12-09 23:03:12.540310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:34.109 passed 00:21:34.109 Test: blockdev nvme admin passthru ...passed 00:21:34.109 Test: blockdev copy ...passed 00:21:34.109 Suite: bdevio tests on: Nvme1n1p2 00:21:34.109 Test: blockdev write read block ...passed 00:21:34.109 Test: blockdev write zeroes read block ...passed 00:21:34.109 Test: blockdev write zeroes read no split ...passed 00:21:34.370 Test: blockdev write zeroes read split ...passed 00:21:34.370 Test: blockdev write zeroes read split partial ...passed 00:21:34.370 Test: blockdev reset ...[2024-12-09 23:03:12.603625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:21:34.370 [2024-12-09 23:03:12.610535] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:21:34.370 passed 00:21:34.370 Test: blockdev write read 8 blocks ...passed 00:21:34.370 Test: blockdev write read size > 128k ...passed 00:21:34.370 Test: blockdev write read invalid size ...passed 00:21:34.370 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:34.370 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:34.370 Test: blockdev write read max offset ...passed 00:21:34.370 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:34.370 Test: blockdev writev readv 8 blocks ...passed 00:21:34.370 Test: blockdev writev readv 30 x 1block ...passed 00:21:34.370 Test: blockdev writev readv block ...passed 00:21:34.370 Test: blockdev writev readv size > 128k ...passed 00:21:34.370 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:34.370 Test: blockdev comparev and writev ...[2024-12-09 23:03:12.634980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d5a30000 len:0x1000 00:21:34.370 [2024-12-09 23:03:12.635039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:34.370 passed 00:21:34.370 Test: blockdev nvme passthru rw ...passed 00:21:34.370 Test: blockdev nvme passthru vendor specific ...passed 00:21:34.370 Test: blockdev nvme admin passthru ...passed 00:21:34.370 Test: blockdev copy ...passed 00:21:34.370 Suite: bdevio tests on: Nvme1n1p1 00:21:34.370 Test: blockdev write read block ...passed 00:21:34.370 Test: blockdev write zeroes read block ...passed 00:21:34.370 Test: blockdev write zeroes read no split ...passed 00:21:34.370 Test: blockdev write zeroes read split ...passed 00:21:34.370 Test: blockdev write zeroes read split partial ...passed 00:21:34.370 Test: blockdev reset ...[2024-12-09 23:03:12.701437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:21:34.370 [2024-12-09 23:03:12.706089] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
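The reset test traced above disconnects the controller at 0000:00:11.0 and waits for bdev_nvme to reconnect it. Outside of bdevio the same operation can be driven by hand through SPDK's RPC interface; a minimal sketch, assuming a controller attached under the illustrative name nvme0 (the -b name is not taken from this run, and here bdevio performs the reset internally rather than via RPC):

    # Attach the controller as bdev "nvme0", then force a reset through bdev_nvme.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_nvme_reset_controller nvme0

The "resetting controller" / "Resetting controller successful" notice pair comes from the bdev_nvme module itself, so a manual reset logs the same lines seen here.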
00:21:34.370 passed 00:21:34.370 Test: blockdev write read 8 blocks ...passed 00:21:34.370 Test: blockdev write read size > 128k ...passed 00:21:34.370 Test: blockdev write read invalid size ...passed 00:21:34.370 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:34.370 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:34.370 Test: blockdev write read max offset ...passed 00:21:34.370 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:34.370 Test: blockdev writev readv 8 blocks ...passed 00:21:34.370 Test: blockdev writev readv 30 x 1block ...passed 00:21:34.370 Test: blockdev writev readv block ...passed 00:21:34.370 Test: blockdev writev readv size > 128k ...passed 00:21:34.370 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:34.370 Test: blockdev comparev and writev ...[2024-12-09 23:03:12.728263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b560e000 len:0x1000 00:21:34.370 [2024-12-09 23:03:12.728319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:34.370 passed 00:21:34.370 Test: blockdev nvme passthru rw ...passed 00:21:34.370 Test: blockdev nvme passthru vendor specific ...passed 00:21:34.371 Test: blockdev nvme admin passthru ...passed 00:21:34.371 Test: blockdev copy ...passed 00:21:34.371 Suite: bdevio tests on: Nvme0n1 00:21:34.371 Test: blockdev write read block ...passed 00:21:34.632 Test: blockdev write zeroes read block ...passed 00:21:34.632 Test: blockdev write zeroes read no split ...passed 00:21:34.632 Test: blockdev write zeroes read split ...passed 00:21:34.632 Test: blockdev write zeroes read split partial ...passed 00:21:34.632 Test: blockdev reset ...[2024-12-09 23:03:12.905271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:21:34.632 [2024-12-09 23:03:12.910952] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:21:34.632 passed 00:21:34.632 Test: blockdev write read 8 blocks ...passed 00:21:34.632 Test: blockdev write read size > 128k ...passed 00:21:34.632 Test: blockdev write read invalid size ...passed 00:21:34.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:34.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:34.632 Test: blockdev write read max offset ...passed 00:21:34.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:34.632 Test: blockdev writev readv 8 blocks ...passed 00:21:34.632 Test: blockdev writev readv 30 x 1block ...passed 00:21:34.632 Test: blockdev writev readv block ...passed 00:21:34.632 Test: blockdev writev readv size > 128k ...passed 00:21:34.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:34.632 Test: blockdev comparev and writev ...passed 00:21:34.632 Test: blockdev nvme passthru rw ...[2024-12-09 23:03:12.932738] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:21:34.632 separate metadata which is not supported yet. 
00:21:34.632 passed 00:21:34.632 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:03:12.934951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:21:34.632 [2024-12-09 23:03:12.935031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:21:34.632 passed 00:21:34.632 Test: blockdev nvme admin passthru ...passed 00:21:34.632 Test: blockdev copy ...passed 00:21:34.632 00:21:34.632 Run Summary: Type Total Ran Passed Failed Inactive 00:21:34.632 suites 7 7 n/a 0 0 00:21:34.632 tests 161 161 161 0 0 00:21:34.632 asserts 1025 1025 1025 0 n/a 00:21:34.632 00:21:34.632 Elapsed time = 1.988 seconds 00:21:34.632 0 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61678 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61678 ']' 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61678 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61678 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.632 killing process with pid 61678 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61678' 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61678 00:21:34.632 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61678 00:21:35.592 23:03:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:35.592 00:21:35.592 real 0m2.985s 00:21:35.592 user 0m7.264s 00:21:35.592 sys 0m0.467s 00:21:35.592 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.592 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:35.592 ************************************ 00:21:35.592 END TEST bdev_bounds 00:21:35.592 ************************************ 00:21:35.592 23:03:13 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:21:35.592 23:03:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:35.592 23:03:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.592 23:03:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:35.593 ************************************ 00:21:35.593 START TEST bdev_nbd 00:21:35.593 ************************************ 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61743 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61743 /var/tmp/spdk-nbd.sock 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61743 ']' 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.593 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:35.593 [2024-12-09 23:03:13.936693] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
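The bdev_bounds teardown earlier in this run traces SPDK's killprocess helper from common/autotest_common.sh: a kill -0 liveness probe, a uname/ps guard so sudo is never signalled directly, then kill and wait. A minimal sketch reconstructing that helper purely from the xtrace (the control flow below is inferred from the traced commands, not copied from the SPDK sources):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1            # the '[' -z 61678 ']' guard in the trace
        kill -0 "$pid" || return 0           # kill -0: probe whether the pid is still alive
        if [ "$(uname)" = Linux ]; then
            # Resolve the command name; the trace resolves 61678 to reactor_0.
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                          # reap the process so its RPC socket is released
    }

Called as killprocess 61678, this reproduces the "killing process with pid 61678" line logged above.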
00:21:35.593 [2024-12-09 23:03:13.936852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.854 [2024-12-09 23:03:14.123707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.115 [2024-12-09 23:03:14.321457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:36.688 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.976 1+0 records in 00:21:36.976 1+0 records out 00:21:36.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000979756 s, 4.2 MB/s 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:36.976 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:36.977 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:36.977 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.239 1+0 records in 00:21:37.239 1+0 records out 00:21:37.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715368 s, 5.7 MB/s 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:37.239 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.500 1+0 records in 00:21:37.500 1+0 records out 00:21:37.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122151 s, 3.4 MB/s 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:37.500 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:37.762 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:37.763 23:03:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.763 1+0 records in 00:21:37.763 1+0 records out 00:21:37.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000970238 s, 4.2 MB/s 00:21:37.763 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.763 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:37.763 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.763 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:37.763 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:37.763 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:37.763 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:37.763 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:38.024 1+0 records in 00:21:38.024 1+0 records out 00:21:38.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128044 s, 3.2 MB/s 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:38.024 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
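Each nbd_start_disk above is followed by the same waitfornbd xtrace: poll /proc/partitions until the kernel publishes the device, then prove the device actually serves I/O with a single 4 KiB direct-I/O dd read. A minimal sketch of that helper reconstructed from the traced commands; the retry delay and the /tmp scratch path are assumptions (the real helper writes its scratch file under test/bdev/nbdtest):

    waitfornbd() {
        local nbd_name=$1 i size
        # Phase 1: up to 20 attempts for the device node to appear in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                        # assumed delay; not visible in the trace
        done
        # Phase 2: up to 20 attempts for a real 4 KiB direct read to come back non-empty.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || true
            size=$(stat -c %s /tmp/nbdtest 2>/dev/null || echo 0)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0     # the '[' 4096 '!=' 0 ']' check in the trace
            sleep 0.1
        done
        return 1
    }

Invoking waitfornbd nbd3 yields exactly the "1+0 records in / 1+0 records out" dd output interleaved through the trace above.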
00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:38.285 1+0 records in 00:21:38.285 1+0 records out 00:21:38.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107539 s, 3.8 MB/s 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:38.285 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:38.546 1+0 records in 00:21:38.546 1+0 records out 00:21:38.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000860284 s, 4.8 MB/s 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:38.546 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd0", 00:21:38.807 "bdev_name": "Nvme0n1" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd1", 00:21:38.807 "bdev_name": "Nvme1n1p1" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd2", 00:21:38.807 "bdev_name": "Nvme1n1p2" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd3", 00:21:38.807 "bdev_name": "Nvme2n1" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd4", 00:21:38.807 "bdev_name": "Nvme2n2" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd5", 00:21:38.807 "bdev_name": "Nvme2n3" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd6", 00:21:38.807 "bdev_name": "Nvme3n1" 00:21:38.807 } 00:21:38.807 ]' 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd0", 00:21:38.807 "bdev_name": "Nvme0n1" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd1", 00:21:38.807 "bdev_name": "Nvme1n1p1" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd2", 00:21:38.807 "bdev_name": "Nvme1n1p2" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd3", 00:21:38.807 "bdev_name": "Nvme2n1" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd4", 00:21:38.807 "bdev_name": "Nvme2n2" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd5", 00:21:38.807 "bdev_name": "Nvme2n3" 00:21:38.807 }, 00:21:38.807 { 00:21:38.807 "nbd_device": "/dev/nbd6", 00:21:38.807 "bdev_name": "Nvme3n1" 00:21:38.807 } 00:21:38.807 ]' 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:38.807 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:39.069 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:39.069 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:39.069 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:39.069 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:39.069 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:39.069 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:39.069 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:39.069 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:39.069 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:39.069 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:39.330 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:39.330 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:39.330 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:39.330 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:39.330 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:39.330 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:39.330 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:39.330 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:39.330 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:39.330 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:21:39.591 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:21:39.591 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:21:39.591 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:21:39.591 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:39.591 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:39.591 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:21:39.591 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:39.591 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:39.591 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:39.591 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:21:39.852 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:21:39.852 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:21:39.852 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:21:39.852 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:39.852 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:39.852 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:21:39.852 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:39.852 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:39.852 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:39.852 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:40.113 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:21:40.374 23:03:18 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:40.374 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:40.635 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:21:40.896 /dev/nbd0 00:21:40.896 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.157 1+0 records in 00:21:41.157 1+0 records out 00:21:41.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134153 s, 3.1 MB/s 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:41.157 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:21:41.157 /dev/nbd1 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.418 1+0 records in 00:21:41.418 1+0 records out 00:21:41.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0013923 s, 2.9 MB/s 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:41.418 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:21:41.418 /dev/nbd10 00:21:41.678 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:21:41.678 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.679 1+0 records in 00:21:41.679 1+0 records out 00:21:41.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00161848 s, 2.5 MB/s 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 
0 ']' 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:41.679 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:21:41.679 /dev/nbd11 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.939 1+0 records in 00:21:41.939 1+0 records out 00:21:41.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000911921 s, 4.5 MB/s 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:41.939 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:21:41.939 /dev/nbd12 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd 
-- common/autotest_common.sh@877 -- # break 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:42.201 1+0 records in 00:21:42.201 1+0 records out 00:21:42.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00166717 s, 2.5 MB/s 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:42.201 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:21:42.201 /dev/nbd13 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:42.463 1+0 records in 00:21:42.463 1+0 records out 00:21:42.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000936832 s, 4.4 MB/s 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:42.463 23:03:20 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:42.463 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:21:42.463 /dev/nbd14 00:21:42.722 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:21:42.722 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:21:42.722 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:21:42.722 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:42.722 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:42.722 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:42.723 1+0 records in 00:21:42.723 1+0 records out 00:21:42.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000844892 s, 4.8 MB/s 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:42.723 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:42.983 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:42.983 { 00:21:42.983 "nbd_device": "/dev/nbd0", 00:21:42.983 "bdev_name": "Nvme0n1" 00:21:42.983 }, 00:21:42.983 { 00:21:42.983 "nbd_device": "/dev/nbd1", 00:21:42.983 "bdev_name": "Nvme1n1p1" 00:21:42.983 }, 00:21:42.983 { 00:21:42.983 "nbd_device": "/dev/nbd10", 00:21:42.983 "bdev_name": "Nvme1n1p2" 00:21:42.983 }, 00:21:42.983 { 00:21:42.983 "nbd_device": "/dev/nbd11", 00:21:42.983 "bdev_name": "Nvme2n1" 00:21:42.983 }, 00:21:42.983 { 00:21:42.983 "nbd_device": "/dev/nbd12", 00:21:42.983 "bdev_name": "Nvme2n2" 00:21:42.983 }, 00:21:42.983 { 00:21:42.983 "nbd_device": "/dev/nbd13", 00:21:42.983 "bdev_name": "Nvme2n3" 00:21:42.983 }, 00:21:42.983 { 00:21:42.983 "nbd_device": 
"/dev/nbd14", 00:21:42.983 "bdev_name": "Nvme3n1" 00:21:42.983 } 00:21:42.983 ]' 00:21:42.983 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:42.983 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:42.983 { 00:21:42.983 "nbd_device": "/dev/nbd0", 00:21:42.983 "bdev_name": "Nvme0n1" 00:21:42.983 }, 00:21:42.984 { 00:21:42.984 "nbd_device": "/dev/nbd1", 00:21:42.984 "bdev_name": "Nvme1n1p1" 00:21:42.984 }, 00:21:42.984 { 00:21:42.984 "nbd_device": "/dev/nbd10", 00:21:42.984 "bdev_name": "Nvme1n1p2" 00:21:42.984 }, 00:21:42.984 { 00:21:42.984 "nbd_device": "/dev/nbd11", 00:21:42.984 "bdev_name": "Nvme2n1" 00:21:42.984 }, 00:21:42.984 { 00:21:42.984 "nbd_device": "/dev/nbd12", 00:21:42.984 "bdev_name": "Nvme2n2" 00:21:42.984 }, 00:21:42.984 { 00:21:42.984 "nbd_device": "/dev/nbd13", 00:21:42.984 "bdev_name": "Nvme2n3" 00:21:42.984 }, 00:21:42.984 { 00:21:42.984 "nbd_device": "/dev/nbd14", 00:21:42.984 "bdev_name": "Nvme3n1" 00:21:42.984 } 00:21:42.984 ]' 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:42.984 /dev/nbd1 00:21:42.984 /dev/nbd10 00:21:42.984 /dev/nbd11 00:21:42.984 /dev/nbd12 00:21:42.984 /dev/nbd13 00:21:42.984 /dev/nbd14' 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:42.984 /dev/nbd1 00:21:42.984 /dev/nbd10 00:21:42.984 /dev/nbd11 00:21:42.984 /dev/nbd12 00:21:42.984 /dev/nbd13 00:21:42.984 /dev/nbd14' 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:42.984 256+0 records in 00:21:42.984 256+0 records out 00:21:42.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00670486 s, 156 MB/s 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:42.984 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:43.244 256+0 records in 00:21:43.244 256+0 records out 00:21:43.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.273637 s, 3.8 MB/s 00:21:43.244 23:03:21 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:43.244 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:43.504 256+0 records in 00:21:43.504 256+0 records out 00:21:43.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.26724 s, 3.9 MB/s 00:21:43.504 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:43.504 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:21:43.763 256+0 records in 00:21:43.763 256+0 records out 00:21:43.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.289945 s, 3.6 MB/s 00:21:43.763 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:43.763 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:21:44.024 256+0 records in 00:21:44.024 256+0 records out 00:21:44.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.275499 s, 3.8 MB/s 00:21:44.024 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:44.024 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:21:44.285 256+0 records in 00:21:44.285 256+0 records out 00:21:44.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245234 s, 4.3 MB/s 00:21:44.285 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:44.285 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:21:44.546 256+0 records in 00:21:44.546 256+0 records out 00:21:44.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.293736 s, 3.6 MB/s 00:21:44.546 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:44.546 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:21:44.808 256+0 records in 00:21:44.808 256+0 records out 00:21:44.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.21913 s, 4.8 MB/s 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:44.808 23:03:23 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:44.808 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:45.069 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:45.069 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:45.069 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:45.069 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.069 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.069 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:45.069 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:45.069 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.069 
23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.069 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:45.329 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:45.329 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:45.329 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:45.329 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.329 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.329 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:45.329 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:45.329 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.329 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.329 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:21:45.588 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:21:45.588 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:21:45.588 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:21:45.588 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.588 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.588 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:21:45.588 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:45.588 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.588 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.588 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:21:45.848 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:21:45.848 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:21:45.848 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:21:45.848 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.848 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.848 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:21:45.848 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:45.848 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.848 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.848 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:21:46.109 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:21:46.109 23:03:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:21:46.109 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:21:46.109 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:46.109 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:46.109 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:21:46.109 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:46.109 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:46.109 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:46.109 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:21:46.370 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:21:46.370 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:21:46.370 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:21:46.370 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:46.370 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:46.370 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:21:46.370 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:46.370 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:46.370 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:46.370 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:46.631 23:03:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:46.893 
23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:46.893 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:47.155 malloc_lvol_verify 00:21:47.155 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:47.416 e797f745-e532-4666-b971-647e78b6310e 00:21:47.416 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:47.677 c545805d-ce56-4670-95bb-e464c8985810 00:21:47.677 23:03:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:47.936 /dev/nbd0 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:47.936 mke2fs 1.47.0 (5-Feb-2023) 00:21:47.936 Discarding device blocks: 0/4096 done 00:21:47.936 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:47.936 00:21:47.936 Allocating group tables: 0/1 done 00:21:47.936 Writing inode tables: 0/1 done 00:21:47.936 Creating journal (1024 blocks): done 00:21:47.936 Writing superblocks and filesystem accounting information: 0/1 done 00:21:47.936 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:47.936 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:47.936 23:03:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61743 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61743 ']' 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61743 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61743 00:21:48.196 killing process with pid 61743 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61743' 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61743 00:21:48.196 23:03:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61743 00:21:49.131 ************************************ 00:21:49.131 END TEST bdev_nbd 00:21:49.131 ************************************ 00:21:49.131 23:03:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:49.131 00:21:49.131 real 0m13.620s 00:21:49.131 user 0m18.118s 00:21:49.131 sys 0m4.587s 00:21:49.131 23:03:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.131 23:03:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:49.131 skipping fio tests on NVMe due to multi-ns failures. 00:21:49.131 23:03:27 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:21:49.131 23:03:27 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:21:49.131 23:03:27 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:21:49.131 23:03:27 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
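The bdev_nbd phase traced above reduces to a short loop from bdev/nbd_common.sh: export each bdev over NBD, wait for the kernel node to appear, push data through it, and compare the read-back. A condensed sketch of that loop, built only from commands visible in the trace (abridged for illustration, not the suite's verbatim source):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  $rpc -s $sock nbd_start_disk Nvme2n3 /dev/nbd13       # one nbd_start_disk per bdev/device pair
  grep -q -w nbd13 /proc/partitions                     # waitfornbd: poll until the kernel exposes the node
  dd if=/dev/urandom of=$tmp bs=4096 count=256          # 1 MiB of reference data
  dd if=$tmp of=/dev/nbd13 bs=4096 count=256 oflag=direct
  cmp -b -n 1M $tmp /dev/nbd13                          # byte-for-byte read-back check
  $rpc -s $sock nbd_stop_disk /dev/nbd13                # waitfornbd_exit then polls /proc/partitions again

nbd_get_disks (also in the trace) returns the JSON device-to-bdev map that the suite counts before and after teardown to confirm all seven exports are gone.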
00:21:49.131 23:03:27 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:49.131 23:03:27 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:49.131 23:03:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:49.131 23:03:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.131 23:03:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:49.131 ************************************ 00:21:49.131 START TEST bdev_verify 00:21:49.131 ************************************ 00:21:49.131 23:03:27 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:49.446 [2024-12-09 23:03:27.633505] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:49.446 [2024-12-09 23:03:27.633702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62186 ] 00:21:49.446 [2024-12-09 23:03:27.801528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:49.707 [2024-12-09 23:03:27.955277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.707 [2024-12-09 23:03:27.955304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.275 Running I/O for 5 seconds... 
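bdev_verify, launched just above, drives all seven bdevs through the bdevperf example app with a verify workload. The flags in the logged command line break down roughly as follows; these glosses are best-effort readings of bdevperf's options, not quotes from its help text:

  # -q 128     queue depth per job
  # -o 4096    I/O size in bytes
  # -w verify  write a known pattern, read it back, and check the contents
  # -t 5       run time in seconds
  # -C         fan each bdev out to every core in the mask (hence the paired
  #            Core Mask 0x1 / 0x2 rows per device in the table below)
  # -m 0x3     core mask: two reactors
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3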
00:21:52.602 16512.00 IOPS, 64.50 MiB/s
[2024-12-09T23:03:32.004Z] 16800.00 IOPS, 65.62 MiB/s
[2024-12-09T23:03:32.949Z] 16661.33 IOPS, 65.08 MiB/s
[2024-12-09T23:03:33.911Z] 17024.00 IOPS, 66.50 MiB/s
[2024-12-09T23:03:33.911Z] 17292.80 IOPS, 67.55 MiB/s
00:21:55.449 Latency(us)
00:21:55.449 [2024-12-09T23:03:33.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:55.449 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x0 length 0xbd0bd
00:21:55.449 Nvme0n1 : 5.08 1210.01 4.73 0.00 0.00 105428.90 23693.78 144380.85
00:21:55.449 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:21:55.449 Nvme0n1 : 5.08 1223.00 4.78 0.00 0.00 104139.17 11141.12 108083.99
00:21:55.449 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x0 length 0x4ff80
00:21:55.449 Nvme1n1p1 : 5.08 1208.98 4.72 0.00 0.00 105184.62 25609.45 143574.25
00:21:55.449 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x4ff80 length 0x4ff80
00:21:55.449 Nvme1n1p1 : 5.09 1231.23 4.81 0.00 0.00 103377.24 13913.80 104857.60
00:21:55.449 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x0 length 0x4ff7f
00:21:55.449 Nvme1n1p2 : 5.08 1208.56 4.72 0.00 0.00 105017.50 26012.75 141961.06
00:21:55.449 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:21:55.449 Nvme1n1p2 : 5.10 1230.42 4.81 0.00 0.00 103202.66 15728.64 100824.62
00:21:55.449 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x0 length 0x80000
00:21:55.449 Nvme2n1 : 5.09 1208.16 4.72 0.00 0.00 104890.50 27625.94 133895.09
00:21:55.449 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x80000 length 0x80000
00:21:55.449 Nvme2n1 : 5.10 1229.74 4.80 0.00 0.00 103052.06 17341.83 98404.82
00:21:55.449 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x0 length 0x80000
00:21:55.449 Nvme2n2 : 5.09 1207.79 4.72 0.00 0.00 104734.01 26012.75 129862.10
00:21:55.449 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x80000 length 0x80000
00:21:55.449 Nvme2n2 : 5.10 1229.40 4.80 0.00 0.00 102894.41 17543.48 100018.02
00:21:55.449 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x0 length 0x80000
00:21:55.449 Nvme2n3 : 5.09 1207.39 4.72 0.00 0.00 104597.39 25105.33 137121.48
00:21:55.449 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x80000 length 0x80000
00:21:55.449 Nvme2n3 : 5.10 1229.05 4.80 0.00 0.00 102760.83 17140.18 106470.79
00:21:55.449 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x0 length 0x20000
00:21:55.449 Nvme3n1 : 5.10 1217.99 4.76 0.00 0.00 103634.81 2545.82 142767.66
00:21:55.449 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:55.449 Verification LBA range: start 0x20000 length 0x20000
00:21:55.449 Nvme3n1 : 5.10 1228.71 4.80 0.00 0.00 102627.57 17039.36 108890.58
[2024-12-09T23:03:33.911Z] ===================================================================================================================
[2024-12-09T23:03:33.911Z] Total : 17070.42 66.68 0.00 0.00 103958.47 2545.82 144380.85
00:21:57.387
00:21:57.387 real 0m7.848s
00:21:57.387 user 0m14.464s
00:21:57.387 sys 0m0.355s
00:21:57.387 ************************************
00:21:57.387 23:03:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:57.387 23:03:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:21:57.387 END TEST bdev_verify ************************************
00:21:57.387 23:03:35 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:21:57.387 23:03:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:21:57.387 23:03:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:57.387 23:03:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:21:57.387 ************************************
00:21:57.387 START TEST bdev_verify_big_io
00:21:57.387 ************************************
00:21:57.387 23:03:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:21:57.387 [2024-12-09 23:03:35.576335] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:21:57.387 [2024-12-09 23:03:35.576533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62284 ]
00:21:57.387 [2024-12-09 23:03:35.748052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:57.652 [2024-12-09 23:03:35.896288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:57.652 [2024-12-09 23:03:35.896316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:58.596 Running I/O for 5 seconds...
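In the Latency(us) tables these runs emit (above, and again for the big-I/O and write-zeroes runs below), the last three columns are average/minimum/maximum completion latency in microseconds, while Fail/s and TO/s count failed and timed-out I/Os per second. To lift the aggregate row out of a saved run, something like the following works; bdevperf.log is a hypothetical capture file, and the field arithmetic assumes the seven numeric columns end the line:

  awk '/ Total +:/ { print "IOPS=" $(NF-6), "MiB/s=" $(NF-5), "avg_us=" $(NF-2) }' bdevperf.log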
00:22:03.165 3224.00 IOPS, 201.50 MiB/s
[2024-12-09T23:03:42.579Z] 4452.00 IOPS, 278.25 MiB/s
[2024-12-09T23:03:42.579Z] 4443.00 IOPS, 277.69 MiB/s
00:22:04.117 Latency(us)
00:22:04.117 [2024-12-09T23:03:42.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:04.117 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:22:04.117 Verification LBA range: start 0x0 length 0xbd0b
00:22:04.117 Nvme0n1 : 5.56 172.56 10.78 0.00 0.00 727781.85 20366.57 825955.25
00:22:04.117 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:22:04.117 Verification LBA range: start 0xbd0b length 0xbd0b
00:22:04.117 Nvme0n1 : 5.56 179.79 11.24 0.00 0.00 699628.47 34885.32 774333.05
00:22:04.117 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:22:04.117 Verification LBA range: start 0x0 length 0x4ff8
00:22:04.117 Nvme1n1p1 : 5.56 171.98 10.75 0.00 0.00 720703.31 18047.61 806596.92
00:22:04.117 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:22:04.117 Verification LBA range: start 0x4ff8 length 0x4ff8
00:22:04.117 Nvme1n1p1 : 5.56 180.27 11.27 0.00 0.00 688318.78 41741.39 742069.17
00:22:04.117 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:22:04.117 Verification LBA range: start 0x0 length 0x4ff7
00:22:04.117 Nvme1n1p2 : 5.57 172.66 10.79 0.00 0.00 709311.61 20467.40 819502.47
00:22:04.117 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:22:04.117 Verification LBA range: start 0x4ff7 length 0x4ff7
00:22:04.117 Nvme1n1p2 : 5.57 179.74 11.23 0.00 0.00 680352.92 44161.18 742069.17
00:22:04.117 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:22:04.117 Verification LBA range: start 0x0 length 0x8000
00:22:04.117 Nvme2n1 : 5.57 172.09 10.76 0.00 0.00 702451.36 20669.05 838860.80
00:22:04.117 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:22:04.118 Verification LBA range: start 0x8000 length 0x8000
00:22:04.118 Nvme2n1 : 5.57 183.70 11.48 0.00 0.00 661418.14 25811.10 745295.56
00:22:04.118 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:22:04.118 Verification LBA range: start 0x0 length 0x8000
00:22:04.118 Nvme2n2 : 5.58 179.01 11.19 0.00 0.00 671113.85 9830.40 816276.09
00:22:04.118 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:22:04.118 Verification LBA range: start 0x8000 length 0x8000
00:22:04.118 Nvme2n2 : 5.58 183.49 11.47 0.00 0.00 652777.35 29844.09 745295.56
00:22:04.118 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:22:04.118 Verification LBA range: start 0x0 length 0x8000
00:22:04.118 Nvme2n3 : 5.59 178.86 11.18 0.00 0.00 662382.47 10637.00 813049.70
00:22:04.118 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:22:04.118 Verification LBA range: start 0x8000 length 0x8000
00:22:04.118 Nvme2n3 : 5.59 183.26 11.45 0.00 0.00 644444.26 33675.42 748521.94
00:22:04.118 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:22:04.118 Verification LBA range: start 0x0 length 0x2000
00:22:04.118 Nvme3n1 : 5.57 169.16 10.57 0.00 0.00 691988.60 11292.36 1090519.04
00:22:04.118 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:22:04.118 Verification LBA range: start 0x2000 length 0x2000
00:22:04.118 Nvme3n1 : 5.59 177.32 11.08 0.00 0.00 658637.40 5217.67 961463.53
00:22:04.118 [2024-12-09T23:03:42.580Z] ===================================================================================================================
[2024-12-09T23:03:42.580Z] Total : 2483.91 155.24 0.00 0.00 683115.04 5217.67 1090519.04
00:22:07.424
00:22:07.424 real 0m10.061s
00:22:07.424 user 0m18.752s
00:22:07.424 sys 0m0.446s
00:22:07.424 23:03:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:07.424 23:03:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:22:07.424 ************************************
00:22:07.424 END TEST bdev_verify_big_io
00:22:07.424 ************************************
00:22:07.424 23:03:45 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:22:07.424 23:03:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:22:07.424 23:03:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:07.424 23:03:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:22:07.424 ************************************
00:22:07.424 START TEST bdev_write_zeroes
00:22:07.424 ************************************
00:22:07.424 23:03:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:22:07.424 [2024-12-09 23:03:45.693292] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:22:07.424 [2024-12-09 23:03:45.693472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62399 ]
00:22:07.424 [2024-12-09 23:03:45.858466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:07.684 [2024-12-09 23:03:46.006018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:08.256 Running I/O for 1 seconds...
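bdev_write_zeroes, just started above, swaps the workload for -w write_zeroes, which exercises each bdev's zero-fill path (spdk_bdev_write_zeroes) for a single second instead of issuing buffered writes. Independent of bdevperf, a range on any Linux block node can be spot-checked for zeroes like this (illustrative only; the NBD exports used earlier in this job are already torn down by this point):

  # exits non-zero at the first non-zero byte in the first 1 MiB
  cmp -n $((1024*1024)) /dev/nbd0 /dev/zero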
00:22:09.284 42098.00 IOPS, 164.45 MiB/s
00:22:09.284 Latency(us)
00:22:09.284 [2024-12-09T23:03:47.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:09.284 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:22:09.284 Nvme0n1 : 1.03 6012.05 23.48 0.00 0.00 21231.55 7259.37 42144.69
00:22:09.284 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:22:09.284 Nvme1n1p1 : 1.03 6017.77 23.51 0.00 0.00 21180.98 14518.74 33070.47
00:22:09.284 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:22:09.284 Nvme1n1p2 : 1.03 6009.66 23.48 0.00 0.00 21062.83 14115.45 29037.49
00:22:09.284 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:22:09.284 Nvme2n1 : 1.03 6002.54 23.45 0.00 0.00 21030.27 14115.45 28432.54
00:22:09.284 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:22:09.284 Nvme2n2 : 1.04 5995.53 23.42 0.00 0.00 21008.12 14216.27 29642.44
00:22:09.284 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:22:09.284 Nvme2n3 : 1.04 5988.41 23.39 0.00 0.00 20967.02 13208.02 30045.74
00:22:09.284 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:22:09.284 Nvme3n1 : 1.04 5981.37 23.36 0.00 0.00 20938.36 11897.30 29037.49
00:22:09.284 [2024-12-09T23:03:47.746Z] ===================================================================================================================
[2024-12-09T23:03:47.746Z] Total : 42007.32 164.09 0.00 0.00 21059.82 7259.37 42144.69
00:22:10.227
00:22:10.227 real 0m2.954s
00:22:10.227 user 0m2.532s
00:22:10.227 sys 0m0.290s
00:22:10.227 23:03:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:10.227 23:03:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:22:10.227 ************************************
00:22:10.227 END TEST bdev_write_zeroes
00:22:10.227 ************************************
00:22:10.227 23:03:48 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:22:10.227 23:03:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:22:10.227 23:03:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:10.227 23:03:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:22:10.227 ************************************
00:22:10.227 START TEST bdev_json_nonenclosed
00:22:10.227 ************************************
00:22:10.227 23:03:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:22:10.488 [2024-12-09 23:03:48.709270] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:22:10.488 [2024-12-09 23:03:48.709446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62452 ] 00:22:10.488 [2024-12-09 23:03:48.876845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.749 [2024-12-09 23:03:49.017325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.749 [2024-12-09 23:03:49.017473] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:10.749 [2024-12-09 23:03:49.017500] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:10.749 [2024-12-09 23:03:49.017511] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:11.013 00:22:11.013 real 0m0.596s 00:22:11.013 user 0m0.365s 00:22:11.013 sys 0m0.122s 00:22:11.013 23:03:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.013 ************************************ 00:22:11.013 END TEST bdev_json_nonenclosed 00:22:11.013 ************************************ 00:22:11.013 23:03:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:11.013 23:03:49 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:11.013 23:03:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:11.013 23:03:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.013 23:03:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:11.013 ************************************ 00:22:11.013 START TEST bdev_json_nonarray 00:22:11.013 ************************************ 00:22:11.013 23:03:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:11.013 [2024-12-09 23:03:49.369318] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:22:11.013 [2024-12-09 23:03:49.369499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62477 ] 00:22:11.274 [2024-12-09 23:03:49.533886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.274 [2024-12-09 23:03:49.675877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.274 [2024-12-09 23:03:49.676049] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
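Both JSON negative tests feed bdevperf a deliberately malformed --json config and pass when spdk_app_start rejects it with exactly the errors logged above. Minimal configs of the shape that would trip each check, written out for illustration (assumed reconstructions, not the literal contents of test/bdev/nonenclosed.json or nonarray.json):

  # trips "not enclosed in {}": the top level is an array, not an object
  cat > /tmp/nonenclosed-example.json <<'EOF'
  [ { "subsystem": "bdev", "config": [] } ]
  EOF
  # trips "'subsystems' should be an array": an object where the array belongs
  cat > /tmp/nonarray-example.json <<'EOF'
  { "subsystems": { "subsystem": "bdev" } }
  EOF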
00:22:11.274 [2024-12-09 23:03:49.676077] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:11.274 [2024-12-09 23:03:49.676092] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:11.561 00:22:11.561 real 0m0.596s 00:22:11.561 user 0m0.363s 00:22:11.561 sys 0m0.126s 00:22:11.561 ************************************ 00:22:11.561 END TEST bdev_json_nonarray 00:22:11.561 ************************************ 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:11.561 23:03:49 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:22:11.561 23:03:49 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:22:11.561 23:03:49 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:22:11.561 23:03:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:11.561 23:03:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.561 23:03:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:11.561 ************************************ 00:22:11.561 START TEST bdev_gpt_uuid 00:22:11.561 ************************************ 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62503 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62503 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62503 ']' 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.561 23:03:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:22:11.825 [2024-12-09 23:03:50.044890] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
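The bdev_gpt_uuid run that follows loads the same bdev.json into a standalone spdk_tgt, waits for bdev examine to finish, and then asserts that each GPT partition bdev reports the expected GUIDs. Stripped of the suite's rpc_cmd wrapper, the per-partition assertion reduces to the following (UUID and jq paths exactly as they appear in the trace below):

  uuid=6f89f330-603b-4116-ac73-2ca8eae53030
  bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$uuid")
  [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]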
00:22:11.825 [2024-12-09 23:03:50.045058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62503 ] 00:22:11.825 [2024-12-09 23:03:50.212334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.088 [2024-12-09 23:03:50.344179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.660 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.660 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:22:12.660 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:12.660 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.660 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:22:13.233 Some configs were skipped because the RPC state that can call them passed over. 00:22:13.233 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.233 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:22:13.233 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.233 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:22:13.233 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.233 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:22:13.234 { 00:22:13.234 "name": "Nvme1n1p1", 00:22:13.234 "aliases": [ 00:22:13.234 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:22:13.234 ], 00:22:13.234 "product_name": "GPT Disk", 00:22:13.234 "block_size": 4096, 00:22:13.234 "num_blocks": 655104, 00:22:13.234 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:22:13.234 "assigned_rate_limits": { 00:22:13.234 "rw_ios_per_sec": 0, 00:22:13.234 "rw_mbytes_per_sec": 0, 00:22:13.234 "r_mbytes_per_sec": 0, 00:22:13.234 "w_mbytes_per_sec": 0 00:22:13.234 }, 00:22:13.234 "claimed": false, 00:22:13.234 "zoned": false, 00:22:13.234 "supported_io_types": { 00:22:13.234 "read": true, 00:22:13.234 "write": true, 00:22:13.234 "unmap": true, 00:22:13.234 "flush": true, 00:22:13.234 "reset": true, 00:22:13.234 "nvme_admin": false, 00:22:13.234 "nvme_io": false, 00:22:13.234 "nvme_io_md": false, 00:22:13.234 "write_zeroes": true, 00:22:13.234 "zcopy": false, 00:22:13.234 "get_zone_info": false, 00:22:13.234 "zone_management": false, 00:22:13.234 "zone_append": false, 00:22:13.234 "compare": true, 00:22:13.234 "compare_and_write": false, 00:22:13.234 "abort": true, 00:22:13.234 "seek_hole": false, 00:22:13.234 "seek_data": false, 00:22:13.234 "copy": true, 00:22:13.234 "nvme_iov_md": false 00:22:13.234 }, 00:22:13.234 "driver_specific": { 
00:22:13.234 "gpt": { 00:22:13.234 "base_bdev": "Nvme1n1", 00:22:13.234 "offset_blocks": 256, 00:22:13.234 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:22:13.234 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:22:13.234 "partition_name": "SPDK_TEST_first" 00:22:13.234 } 00:22:13.234 } 00:22:13.234 } 00:22:13.234 ]' 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:22:13.234 { 00:22:13.234 "name": "Nvme1n1p2", 00:22:13.234 "aliases": [ 00:22:13.234 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:22:13.234 ], 00:22:13.234 "product_name": "GPT Disk", 00:22:13.234 "block_size": 4096, 00:22:13.234 "num_blocks": 655103, 00:22:13.234 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:22:13.234 "assigned_rate_limits": { 00:22:13.234 "rw_ios_per_sec": 0, 00:22:13.234 "rw_mbytes_per_sec": 0, 00:22:13.234 "r_mbytes_per_sec": 0, 00:22:13.234 "w_mbytes_per_sec": 0 00:22:13.234 }, 00:22:13.234 "claimed": false, 00:22:13.234 "zoned": false, 00:22:13.234 "supported_io_types": { 00:22:13.234 "read": true, 00:22:13.234 "write": true, 00:22:13.234 "unmap": true, 00:22:13.234 "flush": true, 00:22:13.234 "reset": true, 00:22:13.234 "nvme_admin": false, 00:22:13.234 "nvme_io": false, 00:22:13.234 "nvme_io_md": false, 00:22:13.234 "write_zeroes": true, 00:22:13.234 "zcopy": false, 00:22:13.234 "get_zone_info": false, 00:22:13.234 "zone_management": false, 00:22:13.234 "zone_append": false, 00:22:13.234 "compare": true, 00:22:13.234 "compare_and_write": false, 00:22:13.234 "abort": true, 00:22:13.234 "seek_hole": false, 00:22:13.234 "seek_data": false, 00:22:13.234 "copy": true, 00:22:13.234 "nvme_iov_md": false 00:22:13.234 }, 00:22:13.234 "driver_specific": { 00:22:13.234 "gpt": { 00:22:13.234 "base_bdev": "Nvme1n1", 00:22:13.234 "offset_blocks": 655360, 00:22:13.234 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:22:13.234 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:22:13.234 "partition_name": "SPDK_TEST_second" 00:22:13.234 } 00:22:13.234 } 00:22:13.234 } 00:22:13.234 ]' 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62503 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62503 ']' 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62503 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62503 00:22:13.234 killing process with pid 62503 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62503' 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62503 00:22:13.234 23:03:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62503 00:22:15.151 00:22:15.151 real 0m3.364s 00:22:15.151 user 0m3.407s 00:22:15.151 sys 0m0.495s 00:22:15.151 23:03:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.151 23:03:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:22:15.151 ************************************ 00:22:15.151 END TEST bdev_gpt_uuid 00:22:15.151 ************************************ 00:22:15.151 23:03:53 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:22:15.151 23:03:53 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:22:15.151 23:03:53 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:22:15.151 23:03:53 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:15.151 23:03:53 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:15.151 23:03:53 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:22:15.151 23:03:53 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:22:15.151 23:03:53 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:22:15.151 23:03:53 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:15.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:15.675 Waiting for block devices as requested 00:22:15.675 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:15.675 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:22:15.675 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:22:15.940 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:22:21.233 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:22:21.233 23:03:59 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:22:21.233 23:03:59 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:22:21.233 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:22:21.233 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:22:21.233 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:22:21.233 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:22:21.233 23:03:59 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:22:21.233 ************************************ 00:22:21.233 END TEST blockdev_nvme_gpt 00:22:21.233 ************************************ 00:22:21.233 00:22:21.233 real 1m3.780s 00:22:21.233 user 1m20.679s 00:22:21.233 sys 0m10.234s 00:22:21.233 23:03:59 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.233 23:03:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:21.233 23:03:59 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:22:21.233 23:03:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:21.233 23:03:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.233 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:22:21.233 ************************************ 00:22:21.233 START TEST nvme 00:22:21.233 ************************************ 00:22:21.233 23:03:59 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:22:21.549 * Looking for test storage... 00:22:21.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:22:21.549 23:03:59 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:21.549 23:03:59 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:22:21.549 23:03:59 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:21.549 23:03:59 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:21.549 23:03:59 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.549 23:03:59 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.549 23:03:59 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.549 23:03:59 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.549 23:03:59 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.549 23:03:59 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.549 23:03:59 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.549 23:03:59 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.549 23:03:59 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.549 23:03:59 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.549 23:03:59 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.549 23:03:59 nvme -- scripts/common.sh@344 -- # case "$op" in 00:22:21.549 23:03:59 nvme -- scripts/common.sh@345 -- # : 1 00:22:21.549 23:03:59 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.549 23:03:59 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:21.549 23:03:59 nvme -- scripts/common.sh@365 -- # decimal 1 00:22:21.549 23:03:59 nvme -- scripts/common.sh@353 -- # local d=1 00:22:21.549 23:03:59 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.549 23:03:59 nvme -- scripts/common.sh@355 -- # echo 1 00:22:21.549 23:03:59 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.549 23:03:59 nvme -- scripts/common.sh@366 -- # decimal 2 00:22:21.549 23:03:59 nvme -- scripts/common.sh@353 -- # local d=2 00:22:21.549 23:03:59 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.549 23:03:59 nvme -- scripts/common.sh@355 -- # echo 2 00:22:21.549 23:03:59 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.549 23:03:59 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.549 23:03:59 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.549 23:03:59 nvme -- scripts/common.sh@368 -- # return 0 00:22:21.549 23:03:59 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.549 23:03:59 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:21.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.549 --rc genhtml_branch_coverage=1 00:22:21.549 --rc genhtml_function_coverage=1 00:22:21.549 --rc genhtml_legend=1 00:22:21.549 --rc geninfo_all_blocks=1 00:22:21.549 --rc geninfo_unexecuted_blocks=1 00:22:21.549 00:22:21.549 ' 00:22:21.549 23:03:59 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:21.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.549 --rc genhtml_branch_coverage=1 00:22:21.549 --rc genhtml_function_coverage=1 00:22:21.549 --rc genhtml_legend=1 00:22:21.549 --rc geninfo_all_blocks=1 00:22:21.549 --rc geninfo_unexecuted_blocks=1 00:22:21.549 00:22:21.549 ' 00:22:21.549 23:03:59 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:21.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.549 --rc genhtml_branch_coverage=1 00:22:21.549 --rc genhtml_function_coverage=1 00:22:21.549 --rc genhtml_legend=1 00:22:21.549 --rc geninfo_all_blocks=1 00:22:21.549 --rc geninfo_unexecuted_blocks=1 00:22:21.549 00:22:21.549 ' 00:22:21.549 23:03:59 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:21.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.549 --rc genhtml_branch_coverage=1 00:22:21.549 --rc genhtml_function_coverage=1 00:22:21.549 --rc genhtml_legend=1 00:22:21.549 --rc geninfo_all_blocks=1 00:22:21.549 --rc geninfo_unexecuted_blocks=1 00:22:21.549 00:22:21.549 ' 00:22:21.549 23:03:59 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:22.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:22.694 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:22.694 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:22.694 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:22:22.694 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:22:22.694 23:04:01 nvme -- nvme/nvme.sh@79 -- # uname 00:22:22.694 23:04:01 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:22:22.694 23:04:01 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:22:22.694 23:04:01 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:22:22.694 23:04:01 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:22:22.694 23:04:01 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:22:22.694 23:04:01 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:22:22.694 Waiting for stub to ready for secondary processes... 00:22:22.694 23:04:01 nvme -- common/autotest_common.sh@1075 -- # stubpid=63152 00:22:22.694 23:04:01 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:22:22.694 23:04:01 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:22:22.694 23:04:01 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63152 ]] 00:22:22.694 23:04:01 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:22:22.694 23:04:01 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:22:22.694 [2024-12-09 23:04:01.078159] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:22:22.694 [2024-12-09 23:04:01.078353] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:22:23.642 23:04:02 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:22:23.643 23:04:02 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63152 ]] 00:22:23.643 23:04:02 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:22:24.268 [2024-12-09 23:04:02.444533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:24.268 [2024-12-09 23:04:02.572773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.268 [2024-12-09 23:04:02.573035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.268 [2024-12-09 23:04:02.573157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.268 [2024-12-09 23:04:02.590817] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:22:24.268 [2024-12-09 23:04:02.590889] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:22:24.268 [2024-12-09 23:04:02.605052] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:22:24.268 [2024-12-09 23:04:02.605213] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:22:24.268 [2024-12-09 23:04:02.608836] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:22:24.268 [2024-12-09 23:04:02.609669] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:22:24.268 [2024-12-09 23:04:02.609803] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:22:24.268 [2024-12-09 23:04:02.614143] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:22:24.268 [2024-12-09 23:04:02.614562] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:22:24.268 [2024-12-09 23:04:02.614686] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:22:24.268 [2024-12-09 23:04:02.618824] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:22:24.268 [2024-12-09 23:04:02.619073] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:22:24.268 [2024-12-09 23:04:02.619191] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:22:24.268 [2024-12-09 23:04:02.619256] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:22:24.268 [2024-12-09 23:04:02.619308] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:22:24.841 done. 00:22:24.841 23:04:03 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:22:24.841 23:04:03 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:22:24.841 23:04:03 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:22:24.841 23:04:03 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:22:24.841 23:04:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.841 23:04:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:24.841 ************************************ 00:22:24.841 START TEST nvme_reset 00:22:24.841 ************************************ 00:22:24.841 23:04:03 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:22:24.841 Initializing NVMe Controllers 00:22:24.841 Skipping QEMU NVMe SSD at 0000:00:11.0 00:22:24.841 Skipping QEMU NVMe SSD at 0000:00:13.0 00:22:24.841 Skipping QEMU NVMe SSD at 0000:00:10.0 00:22:24.841 Skipping QEMU NVMe SSD at 0000:00:12.0 00:22:24.841 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:22:25.105 00:22:25.105 real 0m0.248s 00:22:25.105 user 0m0.081s 00:22:25.105 sys 0m0.122s 00:22:25.105 23:04:03 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.105 23:04:03 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:22:25.105 ************************************ 00:22:25.105 END TEST nvme_reset 00:22:25.105 ************************************ 00:22:25.105 23:04:03 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:22:25.105 23:04:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:25.105 23:04:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.105 23:04:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:25.105 ************************************ 00:22:25.105 START TEST nvme_identify 00:22:25.105 ************************************ 00:22:25.105 23:04:03 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:22:25.105 23:04:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:22:25.105 23:04:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:22:25.105 23:04:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:22:25.105 23:04:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:22:25.105 23:04:03 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:22:25.105 23:04:03 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:22:25.105 23:04:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:25.105 23:04:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:25.105 23:04:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:22:25.105 23:04:03 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:22:25.105 23:04:03 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:25.105 23:04:03 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:22:25.369 [2024-12-09 23:04:03.640989] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63185 terminated unexpected 00:22:25.369 ===================================================== 00:22:25.369 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:25.369 ===================================================== 00:22:25.369 Controller Capabilities/Features 00:22:25.369 ================================ 00:22:25.369 Vendor ID: 1b36 00:22:25.369 Subsystem Vendor ID: 1af4 00:22:25.369 Serial Number: 12341 00:22:25.369 Model Number: QEMU NVMe Ctrl 00:22:25.369 Firmware Version: 8.0.0 00:22:25.369 Recommended Arb Burst: 6 00:22:25.369 IEEE OUI Identifier: 00 54 52 00:22:25.369 Multi-path I/O 00:22:25.369 May have multiple subsystem ports: No 00:22:25.369 May have multiple controllers: No 00:22:25.369 Associated with SR-IOV VF: No 00:22:25.369 Max Data Transfer Size: 524288 00:22:25.369 Max Number of Namespaces: 256 00:22:25.369 Max Number of I/O Queues: 64 00:22:25.369 NVMe Specification Version (VS): 1.4 00:22:25.369 NVMe Specification Version (Identify): 1.4 00:22:25.369 Maximum Queue Entries: 2048 00:22:25.369 Contiguous Queues Required: Yes 00:22:25.369 Arbitration Mechanisms Supported 00:22:25.369 Weighted Round Robin: Not Supported 00:22:25.369 Vendor Specific: Not Supported 00:22:25.369 Reset Timeout: 7500 ms 00:22:25.369 Doorbell Stride: 4 bytes 00:22:25.369 NVM Subsystem Reset: Not Supported 00:22:25.369 Command Sets Supported 00:22:25.369 NVM Command Set: Supported 00:22:25.369 Boot Partition: Not Supported 00:22:25.369 Memory Page Size Minimum: 4096 bytes 00:22:25.369 Memory Page Size Maximum: 65536 bytes 00:22:25.369 Persistent Memory Region: Not Supported 00:22:25.369 Optional Asynchronous Events Supported 00:22:25.369 Namespace Attribute Notices: Supported 00:22:25.369 Firmware Activation Notices: Not Supported 00:22:25.369 ANA Change Notices: Not Supported 00:22:25.369 PLE Aggregate Log Change Notices: Not Supported 00:22:25.369 LBA Status Info Alert Notices: Not Supported 00:22:25.369 EGE Aggregate Log Change Notices: Not Supported 00:22:25.369 Normal NVM Subsystem Shutdown event: Not Supported 00:22:25.369 Zone Descriptor Change Notices: Not Supported 00:22:25.369 Discovery Log Change Notices: Not Supported 00:22:25.369 Controller Attributes 00:22:25.369 128-bit Host Identifier: Not Supported 00:22:25.369 Non-Operational Permissive Mode: Not Supported 00:22:25.369 NVM Sets: Not Supported 00:22:25.369 Read Recovery Levels: Not Supported 00:22:25.369 Endurance Groups: Not Supported 00:22:25.369 Predictable Latency Mode: Not Supported 00:22:25.369 Traffic Based Keep Alive: Not Supported 00:22:25.370 Namespace Granularity: Not Supported 00:22:25.370 SQ Associations: Not Supported 00:22:25.370 UUID List: Not Supported 00:22:25.370 Multi-Domain Subsystem: Not Supported 00:22:25.370 Fixed Capacity Management: Not Supported 00:22:25.370 Variable Capacity Management: Not Supported 00:22:25.370 Delete Endurance Group: Not Supported 00:22:25.370 Delete NVM Set: Not Supported 00:22:25.370 Extended LBA Formats Supported: Supported 00:22:25.370 Flexible Data Placement Supported: Not Supported 00:22:25.370 00:22:25.370 Controller Memory Buffer Support 00:22:25.370 ================================ 00:22:25.370 Supported: No
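The bdfs array in the get_nvme_bdfs trace above is built by piping scripts/gen_nvme.sh into jq; gen_nvme.sh emits a bdev_nvme_attach_controller config in JSON and the jq filter pulls out each PCI address. A minimal standalone bash sketch of the same enumeration pattern (the rootdir path is taken from this run's workspace layout, and the guard line is added here for standalone use):

  rootdir=/home/vagrant/spdk_repo/spdk
  # extract the traddr (PCI BDF) of every configured NVMe controller
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0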
00:22:25.370 00:22:25.370 Persistent Memory Region Support 00:22:25.370 ================================ 00:22:25.370 Supported: No 00:22:25.370 00:22:25.370 Admin Command Set Attributes 00:22:25.370 ============================ 00:22:25.370 Security Send/Receive: Not Supported 00:22:25.370 Format NVM: Supported 00:22:25.370 Firmware Activate/Download: Not Supported 00:22:25.370 Namespace Management: Supported 00:22:25.370 Device Self-Test: Not Supported 00:22:25.370 Directives: Supported 00:22:25.370 NVMe-MI: Not Supported 00:22:25.370 Virtualization Management: Not Supported 00:22:25.370 Doorbell Buffer Config: Supported 00:22:25.370 Get LBA Status Capability: Not Supported 00:22:25.370 Command & Feature Lockdown Capability: Not Supported 00:22:25.370 Abort Command Limit: 4 00:22:25.370 Async Event Request Limit: 4 00:22:25.370 Number of Firmware Slots: N/A 00:22:25.370 Firmware Slot 1 Read-Only: N/A 00:22:25.370 Firmware Activation Without Reset: N/A 00:22:25.370 Multiple Update Detection Support: N/A 00:22:25.370 Firmware Update Granularity: No Information Provided 00:22:25.370 Per-Namespace SMART Log: Yes 00:22:25.370 Asymmetric Namespace Access Log Page: Not Supported 00:22:25.370 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:22:25.370 Command Effects Log Page: Supported 00:22:25.370 Get Log Page Extended Data: Supported 00:22:25.370 Telemetry Log Pages: Not Supported 00:22:25.370 Persistent Event Log Pages: Not Supported 00:22:25.370 Supported Log Pages Log Page: May Support 00:22:25.370 Commands Supported & Effects Log Page: Not Supported 00:22:25.370 Feature Identifiers & Effects Log Page:May Support 00:22:25.370 NVMe-MI Commands & Effects Log Page: May Support 00:22:25.370 Data Area 4 for Telemetry Log: Not Supported 00:22:25.370 Error Log Page Entries Supported: 1 00:22:25.370 Keep Alive: Not Supported 00:22:25.370 00:22:25.370 NVM Command Set Attributes 00:22:25.370 ========================== 00:22:25.370 Submission Queue Entry Size 00:22:25.370 Max: 64 00:22:25.370 Min: 64 00:22:25.370 Completion Queue Entry Size 00:22:25.370 Max: 16 00:22:25.370 Min: 16 00:22:25.370 Number of Namespaces: 256 00:22:25.370 Compare Command: Supported 00:22:25.370 Write Uncorrectable Command: Not Supported 00:22:25.370 Dataset Management Command: Supported 00:22:25.370 Write Zeroes Command: Supported 00:22:25.370 Set Features Save Field: Supported 00:22:25.370 Reservations: Not Supported 00:22:25.370 Timestamp: Supported 00:22:25.370 Copy: Supported 00:22:25.370 Volatile Write Cache: Present 00:22:25.370 Atomic Write Unit (Normal): 1 00:22:25.370 Atomic Write Unit (PFail): 1 00:22:25.370 Atomic Compare & Write Unit: 1 00:22:25.370 Fused Compare & Write: Not Supported 00:22:25.370 Scatter-Gather List 00:22:25.370 SGL Command Set: Supported 00:22:25.370 SGL Keyed: Not Supported 00:22:25.370 SGL Bit Bucket Descriptor: Not Supported 00:22:25.370 SGL Metadata Pointer: Not Supported 00:22:25.370 Oversized SGL: Not Supported 00:22:25.370 SGL Metadata Address: Not Supported 00:22:25.370 SGL Offset: Not Supported 00:22:25.370 Transport SGL Data Block: Not Supported 00:22:25.370 Replay Protected Memory Block: Not Supported 00:22:25.370 00:22:25.370 Firmware Slot Information 00:22:25.370 ========================= 00:22:25.370 Active slot: 1 00:22:25.370 Slot 1 Firmware Revision: 1.0 00:22:25.370 00:22:25.370 00:22:25.370 Commands Supported and Effects 00:22:25.370 ============================== 00:22:25.370 Admin Commands 00:22:25.370 -------------- 00:22:25.370 Delete I/O Submission Queue (00h): Supported 
00:22:25.370 Create I/O Submission Queue (01h): Supported 00:22:25.370 Get Log Page (02h): Supported 00:22:25.370 Delete I/O Completion Queue (04h): Supported 00:22:25.370 Create I/O Completion Queue (05h): Supported 00:22:25.370 Identify (06h): Supported 00:22:25.370 Abort (08h): Supported 00:22:25.370 Set Features (09h): Supported 00:22:25.370 Get Features (0Ah): Supported 00:22:25.370 Asynchronous Event Request (0Ch): Supported 00:22:25.370 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:25.370 Directive Send (19h): Supported 00:22:25.370 Directive Receive (1Ah): Supported 00:22:25.370 Virtualization Management (1Ch): Supported 00:22:25.370 Doorbell Buffer Config (7Ch): Supported 00:22:25.370 Format NVM (80h): Supported LBA-Change 00:22:25.370 I/O Commands 00:22:25.370 ------------ 00:22:25.370 Flush (00h): Supported LBA-Change 00:22:25.370 Write (01h): Supported LBA-Change 00:22:25.370 Read (02h): Supported 00:22:25.370 Compare (05h): Supported 00:22:25.370 Write Zeroes (08h): Supported LBA-Change 00:22:25.370 Dataset Management (09h): Supported LBA-Change 00:22:25.370 Unknown (0Ch): Supported 00:22:25.370 Unknown (12h): Supported 00:22:25.370 Copy (19h): Supported LBA-Change 00:22:25.370 Unknown (1Dh): Supported LBA-Change 00:22:25.370 00:22:25.370 Error Log 00:22:25.370 ========= 00:22:25.370 00:22:25.370 Arbitration 00:22:25.370 =========== 00:22:25.370 Arbitration Burst: no limit 00:22:25.370 00:22:25.370 Power Management 00:22:25.370 ================ 00:22:25.370 Number of Power States: 1 00:22:25.370 Current Power State: Power State #0 00:22:25.370 Power State #0: 00:22:25.370 Max Power: 25.00 W 00:22:25.370 Non-Operational State: Operational 00:22:25.370 Entry Latency: 16 microseconds 00:22:25.370 Exit Latency: 4 microseconds 00:22:25.370 Relative Read Throughput: 0 00:22:25.370 Relative Read Latency: 0 00:22:25.370 Relative Write Throughput: 0 00:22:25.370 Relative Write Latency: 0 [2024-12-09 23:04:03.643724] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63185 terminated unexpected 00:22:25.370 Idle Power: Not Reported 00:22:25.370 Active Power: Not Reported 00:22:25.370 Non-Operational Permissive Mode: Not Supported 00:22:25.370 00:22:25.370 Health Information 00:22:25.370 ================== 00:22:25.370 Critical Warnings: 00:22:25.370 Available Spare Space: OK 00:22:25.370 Temperature: OK 00:22:25.370 Device Reliability: OK 00:22:25.370 Read Only: No 00:22:25.370 Volatile Memory Backup: OK 00:22:25.370 Current Temperature: 323 Kelvin (50 Celsius) 00:22:25.370 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:25.370 Available Spare: 0% 00:22:25.370 Available Spare Threshold: 0% 00:22:25.370 Life Percentage Used: 0% 00:22:25.370 Data Units Read: 1142 00:22:25.370 Data Units Written: 1015 00:22:25.370 Host Read Commands: 44948 00:22:25.370 Host Write Commands: 43851 00:22:25.370 Controller Busy Time: 0 minutes 00:22:25.370 Power Cycles: 0 00:22:25.370 Power On Hours: 0 hours 00:22:25.370 Unsafe Shutdowns: 0 00:22:25.370 Unrecoverable Media Errors: 0 00:22:25.370 Lifetime Error Log Entries: 0 00:22:25.370 Warning Temperature Time: 0 minutes 00:22:25.370 Critical Temperature Time: 0 minutes 00:22:25.370 00:22:25.370 Number of Queues 00:22:25.370 ================ 00:22:25.370 Number of I/O Submission Queues: 64 00:22:25.370 Number of I/O Completion Queues: 64 00:22:25.370 00:22:25.370 ZNS Specific Controller Data 00:22:25.370 ============================ 00:22:25.370 Zone Append Size Limit: 0 00:22:25.370
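The Health Information block above reports temperatures in Kelvin, and the Celsius figure in parentheses is a plain K - 273 conversion. A one-line bash check of the two readings shown for this controller:

  k_to_c() { echo $(( $1 - 273 )); }   # integer Kelvin to Celsius, as printed in the dump
  k_to_c 323   # 50, matching "Current Temperature: 323 Kelvin (50 Celsius)"
  k_to_c 343   # 70, matching "Temperature Threshold: 343 Kelvin (70 Celsius)"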
00:22:25.370 00:22:25.370 Active Namespaces 00:22:25.370 ================= 00:22:25.370 Namespace ID:1 00:22:25.370 Error Recovery Timeout: Unlimited 00:22:25.370 Command Set Identifier: NVM (00h) 00:22:25.370 Deallocate: Supported 00:22:25.370 Deallocated/Unwritten Error: Supported 00:22:25.370 Deallocated Read Value: All 0x00 00:22:25.370 Deallocate in Write Zeroes: Not Supported 00:22:25.370 Deallocated Guard Field: 0xFFFF 00:22:25.370 Flush: Supported 00:22:25.370 Reservation: Not Supported 00:22:25.370 Namespace Sharing Capabilities: Private 00:22:25.370 Size (in LBAs): 1310720 (5GiB) 00:22:25.370 Capacity (in LBAs): 1310720 (5GiB) 00:22:25.370 Utilization (in LBAs): 1310720 (5GiB) 00:22:25.370 Thin Provisioning: Not Supported 00:22:25.370 Per-NS Atomic Units: No 00:22:25.370 Maximum Single Source Range Length: 128 00:22:25.370 Maximum Copy Length: 128 00:22:25.370 Maximum Source Range Count: 128 00:22:25.370 NGUID/EUI64 Never Reused: No 00:22:25.370 Namespace Write Protected: No 00:22:25.370 Number of LBA Formats: 8 00:22:25.370 Current LBA Format: LBA Format #04 00:22:25.370 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:25.370 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:25.370 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:25.370 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:25.370 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:25.370 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:25.370 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:25.370 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:25.370 00:22:25.370 NVM Specific Namespace Data 00:22:25.370 =========================== 00:22:25.371 Logical Block Storage Tag Mask: 0 00:22:25.371 Protection Information Capabilities: 00:22:25.371 16b Guard Protection Information Storage Tag Support: No 00:22:25.371 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:25.371 Storage Tag Check Read Support: No 00:22:25.371 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.371 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.371 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.371 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.371 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.371 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.371 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.371 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.371 ===================================================== 00:22:25.371 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:25.371 ===================================================== 00:22:25.371 Controller Capabilities/Features 00:22:25.371 ================================ 00:22:25.371 Vendor ID: 1b36 00:22:25.371 Subsystem Vendor ID: 1af4 00:22:25.371 Serial Number: 12343 00:22:25.371 Model Number: QEMU NVMe Ctrl 00:22:25.371 Firmware Version: 8.0.0 00:22:25.371 Recommended Arb Burst: 6 00:22:25.371 IEEE OUI Identifier: 00 54 52 00:22:25.371 Multi-path I/O 00:22:25.371 May have multiple subsystem ports: No 00:22:25.371 May have multiple controllers: Yes 00:22:25.371 Associated with SR-IOV VF: No 00:22:25.371 Max Data 
Transfer Size: 524288 00:22:25.371 Max Number of Namespaces: 256 00:22:25.371 Max Number of I/O Queues: 64 00:22:25.371 NVMe Specification Version (VS): 1.4 00:22:25.371 NVMe Specification Version (Identify): 1.4 00:22:25.371 Maximum Queue Entries: 2048 00:22:25.371 Contiguous Queues Required: Yes 00:22:25.371 Arbitration Mechanisms Supported 00:22:25.371 Weighted Round Robin: Not Supported 00:22:25.371 Vendor Specific: Not Supported 00:22:25.371 Reset Timeout: 7500 ms 00:22:25.371 Doorbell Stride: 4 bytes 00:22:25.371 NVM Subsystem Reset: Not Supported 00:22:25.371 Command Sets Supported 00:22:25.371 NVM Command Set: Supported 00:22:25.371 Boot Partition: Not Supported 00:22:25.371 Memory Page Size Minimum: 4096 bytes 00:22:25.371 Memory Page Size Maximum: 65536 bytes 00:22:25.371 Persistent Memory Region: Not Supported 00:22:25.371 Optional Asynchronous Events Supported 00:22:25.371 Namespace Attribute Notices: Supported 00:22:25.371 Firmware Activation Notices: Not Supported 00:22:25.371 ANA Change Notices: Not Supported 00:22:25.371 PLE Aggregate Log Change Notices: Not Supported 00:22:25.371 LBA Status Info Alert Notices: Not Supported 00:22:25.371 EGE Aggregate Log Change Notices: Not Supported 00:22:25.371 Normal NVM Subsystem Shutdown event: Not Supported 00:22:25.371 Zone Descriptor Change Notices: Not Supported 00:22:25.371 Discovery Log Change Notices: Not Supported 00:22:25.371 Controller Attributes 00:22:25.371 128-bit Host Identifier: Not Supported 00:22:25.371 Non-Operational Permissive Mode: Not Supported 00:22:25.371 NVM Sets: Not Supported 00:22:25.371 Read Recovery Levels: Not Supported 00:22:25.371 Endurance Groups: Supported 00:22:25.371 Predictable Latency Mode: Not Supported 00:22:25.371 Traffic Based Keep Alive: Not Supported 00:22:25.371 Namespace Granularity: Not Supported 00:22:25.371 SQ Associations: Not Supported 00:22:25.371 UUID List: Not Supported 00:22:25.371 Multi-Domain Subsystem: Not Supported 00:22:25.371 Fixed Capacity Management: Not Supported 00:22:25.371 Variable Capacity Management: Not Supported 00:22:25.371 Delete Endurance Group: Not Supported 00:22:25.371 Delete NVM Set: Not Supported 00:22:25.371 Extended LBA Formats Supported: Supported 00:22:25.371 Flexible Data Placement Supported: Supported 00:22:25.371 00:22:25.371 Controller Memory Buffer Support 00:22:25.371 ================================ 00:22:25.371 Supported: No 00:22:25.371 00:22:25.371 Persistent Memory Region Support 00:22:25.371 ================================ 00:22:25.371 Supported: No 00:22:25.371 00:22:25.371 Admin Command Set Attributes 00:22:25.371 ============================ 00:22:25.371 Security Send/Receive: Not Supported 00:22:25.371 Format NVM: Supported 00:22:25.371 Firmware Activate/Download: Not Supported 00:22:25.371 Namespace Management: Supported 00:22:25.371 Device Self-Test: Not Supported 00:22:25.371 Directives: Supported 00:22:25.371 NVMe-MI: Not Supported 00:22:25.371 Virtualization Management: Not Supported 00:22:25.371 Doorbell Buffer Config: Supported 00:22:25.371 Get LBA Status Capability: Not Supported 00:22:25.371 Command & Feature Lockdown Capability: Not Supported 00:22:25.371 Abort Command Limit: 4 00:22:25.371 Async Event Request Limit: 4 00:22:25.371 Number of Firmware Slots: N/A 00:22:25.371 Firmware Slot 1 Read-Only: N/A 00:22:25.371 Firmware Activation Without Reset: N/A 00:22:25.371 Multiple Update Detection Support: N/A 00:22:25.371 Firmware Update Granularity: No Information Provided 00:22:25.371 Per-Namespace SMART Log: Yes 00:22:25.371
Asymmetric Namespace Access Log Page: Not Supported 00:22:25.371 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:22:25.371 Command Effects Log Page: Supported 00:22:25.371 Get Log Page Extended Data: Supported 00:22:25.371 Telemetry Log Pages: Not Supported 00:22:25.371 Persistent Event Log Pages: Not Supported 00:22:25.371 Supported Log Pages Log Page: May Support 00:22:25.371 Commands Supported & Effects Log Page: Not Supported 00:22:25.371 Feature Identifiers & Effects Log Page:May Support 00:22:25.371 NVMe-MI Commands & Effects Log Page: May Support 00:22:25.371 Data Area 4 for Telemetry Log: Not Supported 00:22:25.371 Error Log Page Entries Supported: 1 00:22:25.371 Keep Alive: Not Supported 00:22:25.371 00:22:25.371 NVM Command Set Attributes 00:22:25.371 ========================== 00:22:25.371 Submission Queue Entry Size 00:22:25.371 Max: 64 00:22:25.371 Min: 64 00:22:25.371 Completion Queue Entry Size 00:22:25.371 Max: 16 00:22:25.371 Min: 16 00:22:25.371 Number of Namespaces: 256 00:22:25.371 Compare Command: Supported 00:22:25.371 Write Uncorrectable Command: Not Supported 00:22:25.371 Dataset Management Command: Supported 00:22:25.371 Write Zeroes Command: Supported 00:22:25.371 Set Features Save Field: Supported 00:22:25.371 Reservations: Not Supported 00:22:25.371 Timestamp: Supported 00:22:25.371 Copy: Supported 00:22:25.371 Volatile Write Cache: Present 00:22:25.371 Atomic Write Unit (Normal): 1 00:22:25.371 Atomic Write Unit (PFail): 1 00:22:25.371 Atomic Compare & Write Unit: 1 00:22:25.371 Fused Compare & Write: Not Supported 00:22:25.371 Scatter-Gather List 00:22:25.371 SGL Command Set: Supported 00:22:25.371 SGL Keyed: Not Supported 00:22:25.371 SGL Bit Bucket Descriptor: Not Supported 00:22:25.371 SGL Metadata Pointer: Not Supported 00:22:25.371 Oversized SGL: Not Supported 00:22:25.371 SGL Metadata Address: Not Supported 00:22:25.371 SGL Offset: Not Supported 00:22:25.371 Transport SGL Data Block: Not Supported 00:22:25.371 Replay Protected Memory Block: Not Supported 00:22:25.371 00:22:25.371 Firmware Slot Information 00:22:25.371 ========================= 00:22:25.371 Active slot: 1 00:22:25.371 Slot 1 Firmware Revision: 1.0 00:22:25.371 00:22:25.371 00:22:25.371 Commands Supported and Effects 00:22:25.371 ============================== 00:22:25.371 Admin Commands 00:22:25.371 -------------- 00:22:25.371 Delete I/O Submission Queue (00h): Supported 00:22:25.371 Create I/O Submission Queue (01h): Supported 00:22:25.371 Get Log Page (02h): Supported 00:22:25.371 Delete I/O Completion Queue (04h): Supported 00:22:25.371 Create I/O Completion Queue (05h): Supported 00:22:25.371 Identify (06h): Supported 00:22:25.371 Abort (08h): Supported 00:22:25.371 Set Features (09h): Supported 00:22:25.371 Get Features (0Ah): Supported 00:22:25.371 Asynchronous Event Request (0Ch): Supported 00:22:25.371 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:25.371 Directive Send (19h): Supported 00:22:25.371 Directive Receive (1Ah): Supported 00:22:25.371 Virtualization Management (1Ch): Supported 00:22:25.371 Doorbell Buffer Config (7Ch): Supported 00:22:25.371 Format NVM (80h): Supported LBA-Change 00:22:25.371 I/O Commands 00:22:25.371 ------------ 00:22:25.371 Flush (00h): Supported LBA-Change 00:22:25.371 Write (01h): Supported LBA-Change 00:22:25.371 Read (02h): Supported 00:22:25.371 Compare (05h): Supported 00:22:25.371 Write Zeroes (08h): Supported LBA-Change 00:22:25.371 Dataset Management (09h): Supported LBA-Change 00:22:25.371 Unknown (0Ch): Supported 
00:22:25.371 Unknown (12h): Supported 00:22:25.371 Copy (19h): Supported LBA-Change 00:22:25.371 Unknown (1Dh): Supported LBA-Change 00:22:25.371 00:22:25.371 Error Log 00:22:25.371 ========= 00:22:25.371 00:22:25.371 Arbitration 00:22:25.371 =========== 00:22:25.371 Arbitration Burst: no limit 00:22:25.371 00:22:25.372 Power Management 00:22:25.372 ================ 00:22:25.372 Number of Power States: 1 00:22:25.372 Current Power State: Power State #0 00:22:25.372 Power State #0: 00:22:25.372 Max Power: 25.00 W 00:22:25.372 Non-Operational State: Operational 00:22:25.372 Entry Latency: 16 microseconds 00:22:25.372 Exit Latency: 4 microseconds 00:22:25.372 Relative Read Throughput: 0 00:22:25.372 Relative Read Latency: 0 00:22:25.372 Relative Write Throughput: 0 00:22:25.372 Relative Write Latency: 0 00:22:25.372 Idle Power: Not Reported 00:22:25.372 Active Power: Not Reported 00:22:25.372 Non-Operational Permissive Mode: Not Supported 00:22:25.372 00:22:25.372 Health Information 00:22:25.372 ================== 00:22:25.372 Critical Warnings: 00:22:25.372 Available Spare Space: OK 00:22:25.372 Temperature: OK 00:22:25.372 Device Reliability: OK 00:22:25.372 Read Only: No 00:22:25.372 Volatile Memory Backup: OK 00:22:25.372 Current Temperature: 323 Kelvin (50 Celsius) 00:22:25.372 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:25.372 Available Spare: 0% 00:22:25.372 Available Spare Threshold: 0% 00:22:25.372 Life Percentage Used: 0% 00:22:25.372 Data Units Read: 776 00:22:25.372 Data Units Written: 705 00:22:25.372 Host Read Commands: 30202 00:22:25.372 Host Write Commands: 29625 00:22:25.372 Controller Busy Time: 0 minutes 00:22:25.372 Power Cycles: 0 00:22:25.372 Power On Hours: 0 hours 00:22:25.372 Unsafe Shutdowns: 0 00:22:25.372 Unrecoverable Media Errors: 0 00:22:25.372 Lifetime Error Log Entries: 0 00:22:25.372 Warning Temperature Time: 0 minutes 00:22:25.372 Critical Temperature Time: 0 minutes 00:22:25.372 00:22:25.372 Number of Queues 00:22:25.372 ================ 00:22:25.372 Number of I/O Submission Queues: 64 00:22:25.372 Number of I/O Completion Queues: 64 00:22:25.372 00:22:25.372 ZNS Specific Controller Data 00:22:25.372 ============================ 00:22:25.372 Zone Append Size Limit: 0 00:22:25.372 00:22:25.372 00:22:25.372 Active Namespaces 00:22:25.372 ================= 00:22:25.372 Namespace ID:1 00:22:25.372 Error Recovery Timeout: Unlimited 00:22:25.372 Command Set Identifier: NVM (00h) 00:22:25.372 Deallocate: Supported 00:22:25.372 Deallocated/Unwritten Error: Supported 00:22:25.372 Deallocated Read Value: All 0x00 00:22:25.372 Deallocate in Write Zeroes: Not Supported 00:22:25.372 Deallocated Guard Field: 0xFFFF 00:22:25.372 Flush: Supported 00:22:25.372 Reservation: Not Supported 00:22:25.372 Namespace Sharing Capabilities: Multiple Controllers 00:22:25.372 Size (in LBAs): 262144 (1GiB) 00:22:25.372 Capacity (in LBAs): 262144 (1GiB) 00:22:25.372 Utilization (in LBAs): 262144 (1GiB) 00:22:25.372 Thin Provisioning: Not Supported 00:22:25.372 Per-NS Atomic Units: No 00:22:25.372 Maximum Single Source Range Length: 128 00:22:25.372 Maximum Copy Length: 128 00:22:25.372 Maximum Source Range Count: 128 00:22:25.372 NGUID/EUI64 Never Reused: No 00:22:25.372 Namespace Write Protected: No 00:22:25.372 Endurance group ID: 1 00:22:25.372 Number of LBA Formats: 8 00:22:25.372 Current LBA Format: LBA Format #04 00:22:25.372 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:25.372 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:25.372 LBA Format #02: Data Size: 
512 Metadata Size: 16 00:22:25.372 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:25.372 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:25.372 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:25.372 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:25.372 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:25.372 00:22:25.372 Get Feature FDP: 00:22:25.372 ================ 00:22:25.372 Enabled: Yes 00:22:25.372 FDP configuration index: 0 00:22:25.372 00:22:25.372 FDP configurations log page 00:22:25.372 =========================== 00:22:25.372 Number of FDP configurations: 1 00:22:25.372 Version: 0 00:22:25.372 Size: 112 00:22:25.372 FDP Configuration Descriptor: 0 00:22:25.372 Descriptor Size: 96 00:22:25.372 Reclaim Group Identifier format: 2 00:22:25.372 FDP Volatile Write Cache: Not Present 00:22:25.372 FDP Configuration: Valid 00:22:25.372 Vendor Specific Size: 0 00:22:25.372 Number of Reclaim Groups: 2 00:22:25.372 Number of Reclaim Unit Handles: 8 00:22:25.372 Max Placement Identifiers: 128 00:22:25.372 Number of Namespaces Supported: 256 00:22:25.372 Reclaim unit Nominal Size: 6000000 bytes 00:22:25.372 Estimated Reclaim Unit Time Limit: Not Reported 00:22:25.372 RUH Desc #000: RUH Type: Initially Isolated 00:22:25.372 RUH Desc #001: RUH Type: Initially Isolated 00:22:25.372 RUH Desc #002: RUH Type: Initially Isolated 00:22:25.372 RUH Desc #003: RUH Type: Initially Isolated 00:22:25.372 RUH Desc #004: RUH Type: Initially Isolated 00:22:25.372 RUH Desc #005: RUH Type: Initially Isolated 00:22:25.372 RUH Desc #006: RUH Type: Initially Isolated 00:22:25.372 RUH Desc #007: RUH Type: Initially Isolated 00:22:25.372 00:22:25.372 FDP reclaim unit handle usage log page 00:22:25.372 ====================================== 00:22:25.372 Number of Reclaim Unit Handles: 8 00:22:25.372 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:22:25.372 RUH Usage Desc #001: RUH Attributes: Unused 00:22:25.372 RUH Usage Desc #002: RUH Attributes: Unused 00:22:25.372 RUH Usage Desc #003: RUH Attributes: Unused 00:22:25.372 RUH Usage Desc #004: RUH Attributes: Unused 00:22:25.372 [2024-12-09 23:04:03.646066] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63185 terminated unexpected 00:22:25.372 RUH Usage Desc #005: RUH Attributes: Unused 00:22:25.372 RUH Usage Desc #006: RUH Attributes: Unused 00:22:25.372 RUH Usage Desc #007: RUH Attributes: Unused 00:22:25.372 00:22:25.372 FDP statistics log page 00:22:25.372 ======================= 00:22:25.372 Host bytes with metadata written: 429826048 00:22:25.372 Media bytes with metadata written: 429891584 00:22:25.372 Media bytes erased: 0 00:22:25.372 00:22:25.372 FDP events log page 00:22:25.372 =================== 00:22:25.372 Number of FDP events: 0 00:22:25.372
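The namespace sizes in these dumps are given in LBAs, with a rounded GiB value in parentheses; multiplying by the 4096-byte data size of the active LBA format reproduces the printed figures. A worked check in bash arithmetic for two of the namespaces reported above:

  echo $(( 1310720 * 4096 ))           # 5368709120 bytes for the 12341 namespace
  echo $(( 1310720 * 4096 / 2**30 ))   # 5 GiB, as printed
  echo $(( 262144 * 4096 / 2**30 ))    # 1 GiB for the fdp-subsys3 namespace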
00:22:25.372 NVM Specific Namespace Data 00:22:25.372 =========================== 00:22:25.372 Logical Block Storage Tag Mask: 0 00:22:25.372 Protection Information Capabilities: 00:22:25.372 16b Guard Protection Information Storage Tag Support: No 00:22:25.372 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:25.372 Storage Tag Check Read Support: No 00:22:25.372 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.372 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.372 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.372 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.372 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.372 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.372 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.372 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.372 ===================================================== 00:22:25.372 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:25.372 ===================================================== 00:22:25.372 Controller Capabilities/Features 00:22:25.372 ================================ 00:22:25.372 Vendor ID: 1b36 00:22:25.372 Subsystem Vendor ID: 1af4 00:22:25.372 Serial Number: 12340 00:22:25.372 Model Number: QEMU NVMe Ctrl 00:22:25.372 Firmware Version: 8.0.0 00:22:25.372 Recommended Arb Burst: 6 00:22:25.372 IEEE OUI Identifier: 00 54 52 00:22:25.372 Multi-path I/O 00:22:25.372 May have multiple subsystem ports: No 00:22:25.372 May have multiple controllers: No 00:22:25.372 Associated with SR-IOV VF: No 00:22:25.372 Max Data Transfer Size: 524288 00:22:25.372 Max Number of Namespaces: 256 00:22:25.372 Max Number of I/O Queues: 64 00:22:25.372 NVMe Specification Version (VS): 1.4 00:22:25.372 NVMe Specification Version (Identify): 1.4 00:22:25.372 Maximum Queue Entries: 2048 00:22:25.372 Contiguous Queues Required: Yes 00:22:25.372 Arbitration Mechanisms Supported 00:22:25.372 Weighted Round Robin: Not Supported 00:22:25.372 Vendor Specific: Not Supported 00:22:25.372 Reset Timeout: 7500 ms 00:22:25.372 Doorbell Stride: 4 bytes 00:22:25.373 NVM Subsystem Reset: Not Supported 00:22:25.373 Command Sets Supported 00:22:25.373 NVM Command Set: Supported 00:22:25.373 Boot Partition: Not Supported 00:22:25.373 Memory Page Size Minimum: 4096 bytes 00:22:25.373 Memory Page Size Maximum: 65536 bytes 00:22:25.373 Persistent Memory Region: Not Supported 00:22:25.373 Optional Asynchronous Events Supported 00:22:25.373 Namespace Attribute Notices: Supported 00:22:25.373 Firmware Activation Notices: Not Supported 00:22:25.373 ANA Change Notices: Not Supported 00:22:25.373 PLE Aggregate Log Change Notices: Not Supported 00:22:25.373 LBA Status Info Alert Notices: Not Supported 00:22:25.373 EGE Aggregate Log Change Notices: Not Supported 00:22:25.373 Normal NVM Subsystem Shutdown event: Not Supported 00:22:25.373 Zone Descriptor Change Notices: Not Supported 00:22:25.373 Discovery Log Change Notices: Not Supported 00:22:25.373 Controller Attributes 00:22:25.373 128-bit Host Identifier: Not Supported 00:22:25.373 Non-Operational Permissive Mode: Not Supported 00:22:25.373 NVM Sets: Not Supported 00:22:25.373 Read Recovery Levels: Not Supported 00:22:25.373 Endurance Groups: Not Supported 00:22:25.373 Predictable Latency Mode: Not Supported 00:22:25.373 Traffic Based Keep Alive: Not Supported 00:22:25.373 Namespace Granularity: Not Supported 00:22:25.373 SQ Associations: Not Supported 00:22:25.373 UUID List: Not Supported 00:22:25.373 Multi-Domain Subsystem: Not Supported 00:22:25.373 Fixed Capacity Management: Not Supported 00:22:25.373 Variable Capacity Management: Not Supported 00:22:25.373 Delete Endurance Group: Not Supported 00:22:25.373 Delete NVM Set: Not Supported 00:22:25.373 Extended LBA Formats Supported: Supported 00:22:25.373 Flexible Data Placement Supported: Not Supported 00:22:25.373 00:22:25.373 Controller
Memory Buffer Support 00:22:25.373 ================================ 00:22:25.373 Supported: No 00:22:25.373 00:22:25.373 Persistent Memory Region Support 00:22:25.373 ================================ 00:22:25.373 Supported: No 00:22:25.373 00:22:25.373 Admin Command Set Attributes 00:22:25.373 ============================ 00:22:25.373 Security Send/Receive: Not Supported 00:22:25.373 Format NVM: Supported 00:22:25.373 Firmware Activate/Download: Not Supported 00:22:25.373 Namespace Management: Supported 00:22:25.373 Device Self-Test: Not Supported 00:22:25.373 Directives: Supported 00:22:25.373 NVMe-MI: Not Supported 00:22:25.373 Virtualization Management: Not Supported 00:22:25.373 Doorbell Buffer Config: Supported 00:22:25.373 Get LBA Status Capability: Not Supported 00:22:25.373 Command & Feature Lockdown Capability: Not Supported 00:22:25.373 Abort Command Limit: 4 00:22:25.373 Async Event Request Limit: 4 00:22:25.373 Number of Firmware Slots: N/A 00:22:25.373 Firmware Slot 1 Read-Only: N/A 00:22:25.373 Firmware Activation Without Reset: N/A 00:22:25.373 Multiple Update Detection Support: N/A 00:22:25.373 Firmware Update Granularity: No Information Provided 00:22:25.373 Per-Namespace SMART Log: Yes 00:22:25.373 Asymmetric Namespace Access Log Page: Not Supported 00:22:25.373 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:22:25.373 Command Effects Log Page: Supported 00:22:25.373 Get Log Page Extended Data: Supported 00:22:25.373 Telemetry Log Pages: Not Supported 00:22:25.373 Persistent Event Log Pages: Not Supported 00:22:25.373 Supported Log Pages Log Page: May Support 00:22:25.373 Commands Supported & Effects Log Page: Not Supported 00:22:25.373 Feature Identifiers & Effects Log Page:May Support 00:22:25.373 NVMe-MI Commands & Effects Log Page: May Support 00:22:25.373 Data Area 4 for Telemetry Log: Not Supported 00:22:25.373 Error Log Page Entries Supported: 1 00:22:25.373 Keep Alive: Not Supported 00:22:25.373 00:22:25.373 NVM Command Set Attributes 00:22:25.373 ========================== 00:22:25.373 Submission Queue Entry Size 00:22:25.373 Max: 64 00:22:25.373 Min: 64 00:22:25.373 Completion Queue Entry Size 00:22:25.373 Max: 16 00:22:25.373 Min: 16 00:22:25.373 Number of Namespaces: 256 00:22:25.373 Compare Command: Supported 00:22:25.373 Write Uncorrectable Command: Not Supported 00:22:25.373 Dataset Management Command: Supported 00:22:25.373 Write Zeroes Command: Supported 00:22:25.373 Set Features Save Field: Supported 00:22:25.373 Reservations: Not Supported 00:22:25.373 Timestamp: Supported 00:22:25.373 Copy: Supported 00:22:25.373 Volatile Write Cache: Present 00:22:25.373 Atomic Write Unit (Normal): 1 00:22:25.373 Atomic Write Unit (PFail): 1 00:22:25.373 Atomic Compare & Write Unit: 1 00:22:25.373 Fused Compare & Write: Not Supported 00:22:25.373 Scatter-Gather List 00:22:25.373 SGL Command Set: Supported 00:22:25.373 SGL Keyed: Not Supported 00:22:25.373 SGL Bit Bucket Descriptor: Not Supported 00:22:25.373 SGL Metadata Pointer: Not Supported 00:22:25.373 Oversized SGL: Not Supported 00:22:25.373 SGL Metadata Address: Not Supported 00:22:25.373 SGL Offset: Not Supported 00:22:25.373 Transport SGL Data Block: Not Supported 00:22:25.373 Replay Protected Memory Block: Not Supported 00:22:25.373 00:22:25.373 Firmware Slot Information 00:22:25.373 ========================= 00:22:25.373 Active slot: 1 00:22:25.373 Slot 1 Firmware Revision: 1.0 00:22:25.373 00:22:25.373 00:22:25.373 Commands Supported and Effects 00:22:25.373 ============================== 00:22:25.373 Admin 
Commands 00:22:25.373 -------------- 00:22:25.373 Delete I/O Submission Queue (00h): Supported 00:22:25.373 Create I/O Submission Queue (01h): Supported 00:22:25.373 Get Log Page (02h): Supported 00:22:25.373 Delete I/O Completion Queue (04h): Supported 00:22:25.373 Create I/O Completion Queue (05h): Supported 00:22:25.373 Identify (06h): Supported 00:22:25.373 Abort (08h): Supported 00:22:25.373 Set Features (09h): Supported 00:22:25.373 Get Features (0Ah): Supported 00:22:25.373 Asynchronous Event Request (0Ch): Supported 00:22:25.373 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:25.373 Directive Send (19h): Supported 00:22:25.373 Directive Receive (1Ah): Supported 00:22:25.373 Virtualization Management (1Ch): Supported 00:22:25.373 Doorbell Buffer Config (7Ch): Supported 00:22:25.373 Format NVM (80h): Supported LBA-Change 00:22:25.373 I/O Commands 00:22:25.373 ------------ 00:22:25.373 Flush (00h): Supported LBA-Change 00:22:25.373 Write (01h): Supported LBA-Change 00:22:25.373 Read (02h): Supported 00:22:25.373 Compare (05h): Supported 00:22:25.373 Write Zeroes (08h): Supported LBA-Change 00:22:25.373 Dataset Management (09h): Supported LBA-Change 00:22:25.373 Unknown (0Ch): Supported 00:22:25.373 Unknown (12h): Supported 00:22:25.373 Copy (19h): Supported LBA-Change 00:22:25.373 Unknown (1Dh): Supported LBA-Change 00:22:25.373 00:22:25.373 Error Log 00:22:25.373 ========= 00:22:25.373 00:22:25.373 Arbitration 00:22:25.373 =========== 00:22:25.373 Arbitration Burst: no limit 00:22:25.373 00:22:25.373 Power Management 00:22:25.373 ================ 00:22:25.373 Number of Power States: 1 00:22:25.373 Current Power State: Power State #0 00:22:25.373 Power State #0: 00:22:25.373 Max Power: 25.00 W 00:22:25.373 Non-Operational State: Operational 00:22:25.373 Entry Latency: 16 microseconds 00:22:25.373 Exit Latency: 4 microseconds 00:22:25.373 Relative Read Throughput: 0 00:22:25.373 Relative Read Latency: 0 00:22:25.373 Relative Write Throughput: 0 00:22:25.373 Relative Write Latency: 0 00:22:25.373 Idle Power: Not Reported 00:22:25.373 Active Power: Not Reported 00:22:25.373 Non-Operational Permissive Mode: Not Supported 00:22:25.373 00:22:25.373 Health Information 00:22:25.373 ================== 00:22:25.373 Critical Warnings: 00:22:25.373 Available Spare Space: OK 00:22:25.373 Temperature: OK 00:22:25.373 Device Reliability: OK 00:22:25.373 Read Only: No 00:22:25.373 Volatile Memory Backup: OK 00:22:25.373 Current Temperature: 323 Kelvin (50 Celsius) 00:22:25.373 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:25.373 Available Spare: 0% 00:22:25.373 Available Spare Threshold: 0% 00:22:25.373 Life Percentage Used: 0% 00:22:25.373 Data Units Read: 729 00:22:25.373 Data Units Written: 657 00:22:25.373 Host Read Commands: 29560 00:22:25.373 Host Write Commands: 29346 00:22:25.373 Controller Busy Time: 0 minutes 00:22:25.373 Power Cycles: 0 00:22:25.373 Power On Hours: 0 hours 00:22:25.373 Unsafe Shutdowns: 0 00:22:25.373 Unrecoverable Media Errors: 0 00:22:25.373 Lifetime Error Log Entries: 0 00:22:25.373 Warning Temperature Time: 0 minutes 00:22:25.373 Critical Temperature Time: 0 minutes 00:22:25.373 00:22:25.373 Number of Queues 00:22:25.373 ================ 00:22:25.373 Number of I/O Submission Queues: 64 00:22:25.373 Number of I/O Completion Queues: 64 00:22:25.373 00:22:25.373 ZNS Specific Controller Data 00:22:25.373 ============================ 00:22:25.373 Zone Append Size Limit: 0 00:22:25.373 00:22:25.373 00:22:25.373 Active Namespaces 00:22:25.373 
================= 00:22:25.373 Namespace ID:1 00:22:25.373 Error Recovery Timeout: Unlimited 00:22:25.373 Command Set Identifier: NVM (00h) 00:22:25.373 Deallocate: Supported 00:22:25.373 Deallocated/Unwritten Error: Supported 00:22:25.373 Deallocated Read Value: All 0x00 00:22:25.373 Deallocate in Write Zeroes: Not Supported 00:22:25.373 Deallocated Guard Field: 0xFFFF 00:22:25.373 Flush: Supported 00:22:25.373 Reservation: Not Supported 00:22:25.373 Metadata Transferred as: Separate Metadata Buffer 00:22:25.374 Namespace Sharing Capabilities: Private 00:22:25.374 Size (in LBAs): 1548666 (5GiB) 00:22:25.374 Capacity (in LBAs): 1548666 (5GiB) 00:22:25.374 Utilization (in LBAs): 1548666 (5GiB) 00:22:25.374 Thin Provisioning: Not Supported 00:22:25.374 Per-NS Atomic Units: No 00:22:25.374 Maximum Single Source Range Length: 128 00:22:25.374 Maximum Copy Length: 128 00:22:25.374 Maximum Source Range Count: 128 00:22:25.374 NGUID/EUI64 Never Reused: No 00:22:25.374 Namespace Write Protected: No 00:22:25.374 Number of LBA Formats: 8 00:22:25.374 Current LBA Format: LBA Format #07 00:22:25.374 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:25.374 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:25.374 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:25.374 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:25.374 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:25.374 [2024-12-09 23:04:03.647087] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63185 terminated unexpected 00:22:25.374 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:25.374 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:25.374 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:25.374 00:22:25.374 NVM Specific Namespace Data 00:22:25.374 =========================== 00:22:25.374 Logical Block Storage Tag Mask: 0 00:22:25.374 Protection Information Capabilities: 00:22:25.374 16b Guard Protection Information Storage Tag Support: No 00:22:25.374 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:25.374 Storage Tag Check Read Support: No 00:22:25.374 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.374 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.374 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.374 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.374 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.374 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.374 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.374 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.374 ===================================================== 00:22:25.374 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:25.374 ===================================================== 00:22:25.374 Controller Capabilities/Features 00:22:25.374 ================================ 00:22:25.374 Vendor ID: 1b36 00:22:25.374 Subsystem Vendor ID: 1af4 00:22:25.374 Serial Number: 12342 00:22:25.374 Model Number: QEMU NVMe Ctrl 00:22:25.374 Firmware Version: 8.0.0 00:22:25.374 Recommended Arb Burst: 6 00:22:25.374 IEEE OUI Identifier: 00 54 52 00:22:25.374 Multi-path I/O
00:22:25.374 May have multiple subsystem ports: No 00:22:25.374 May have multiple controllers: No 00:22:25.374 Associated with SR-IOV VF: No 00:22:25.374 Max Data Transfer Size: 524288 00:22:25.374 Max Number of Namespaces: 256 00:22:25.374 Max Number of I/O Queues: 64 00:22:25.374 NVMe Specification Version (VS): 1.4 00:22:25.374 NVMe Specification Version (Identify): 1.4 00:22:25.374 Maximum Queue Entries: 2048 00:22:25.374 Contiguous Queues Required: Yes 00:22:25.374 Arbitration Mechanisms Supported 00:22:25.374 Weighted Round Robin: Not Supported 00:22:25.374 Vendor Specific: Not Supported 00:22:25.374 Reset Timeout: 7500 ms 00:22:25.374 Doorbell Stride: 4 bytes 00:22:25.374 NVM Subsystem Reset: Not Supported 00:22:25.374 Command Sets Supported 00:22:25.374 NVM Command Set: Supported 00:22:25.374 Boot Partition: Not Supported 00:22:25.374 Memory Page Size Minimum: 4096 bytes 00:22:25.374 Memory Page Size Maximum: 65536 bytes 00:22:25.374 Persistent Memory Region: Not Supported 00:22:25.374 Optional Asynchronous Events Supported 00:22:25.374 Namespace Attribute Notices: Supported 00:22:25.374 Firmware Activation Notices: Not Supported 00:22:25.374 ANA Change Notices: Not Supported 00:22:25.374 PLE Aggregate Log Change Notices: Not Supported 00:22:25.374 LBA Status Info Alert Notices: Not Supported 00:22:25.374 EGE Aggregate Log Change Notices: Not Supported 00:22:25.374 Normal NVM Subsystem Shutdown event: Not Supported 00:22:25.374 Zone Descriptor Change Notices: Not Supported 00:22:25.374 Discovery Log Change Notices: Not Supported 00:22:25.374 Controller Attributes 00:22:25.374 128-bit Host Identifier: Not Supported 00:22:25.374 Non-Operational Permissive Mode: Not Supported 00:22:25.374 NVM Sets: Not Supported 00:22:25.374 Read Recovery Levels: Not Supported 00:22:25.374 Endurance Groups: Not Supported 00:22:25.374 Predictable Latency Mode: Not Supported 00:22:25.374 Traffic Based Keep ALive: Not Supported 00:22:25.374 Namespace Granularity: Not Supported 00:22:25.374 SQ Associations: Not Supported 00:22:25.374 UUID List: Not Supported 00:22:25.374 Multi-Domain Subsystem: Not Supported 00:22:25.374 Fixed Capacity Management: Not Supported 00:22:25.374 Variable Capacity Management: Not Supported 00:22:25.374 Delete Endurance Group: Not Supported 00:22:25.374 Delete NVM Set: Not Supported 00:22:25.374 Extended LBA Formats Supported: Supported 00:22:25.374 Flexible Data Placement Supported: Not Supported 00:22:25.374 00:22:25.374 Controller Memory Buffer Support 00:22:25.374 ================================ 00:22:25.374 Supported: No 00:22:25.374 00:22:25.374 Persistent Memory Region Support 00:22:25.374 ================================ 00:22:25.374 Supported: No 00:22:25.374 00:22:25.374 Admin Command Set Attributes 00:22:25.374 ============================ 00:22:25.374 Security Send/Receive: Not Supported 00:22:25.374 Format NVM: Supported 00:22:25.374 Firmware Activate/Download: Not Supported 00:22:25.374 Namespace Management: Supported 00:22:25.374 Device Self-Test: Not Supported 00:22:25.374 Directives: Supported 00:22:25.374 NVMe-MI: Not Supported 00:22:25.374 Virtualization Management: Not Supported 00:22:25.374 Doorbell Buffer Config: Supported 00:22:25.374 Get LBA Status Capability: Not Supported 00:22:25.374 Command & Feature Lockdown Capability: Not Supported 00:22:25.374 Abort Command Limit: 4 00:22:25.374 Async Event Request Limit: 4 00:22:25.374 Number of Firmware Slots: N/A 00:22:25.374 Firmware Slot 1 Read-Only: N/A 00:22:25.374 Firmware Activation Without Reset: N/A 
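The capability figures in the dump above — Maximum Queue Entries: 2048, Reset Timeout: 7500 ms, Doorbell Stride: 4 bytes, and the 4096/65536-byte memory page bounds — are all decoded from bit-fields of the controller's CAP register. The minimal C sketch below shows that arithmetic, assuming the field layout from the NVMe base specification; the cap value here is fabricated so the decoded fields reproduce the numbers reported in this log, not read from real hardware.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Hypothetical CAP value chosen to match the dump above; on real
         * hardware this 64-bit register is read from BAR0 offset 0x0. */
        uint64_t cap = 0x004000000F0007FFULL;

        unsigned mqes   = (unsigned)(cap & 0xffff);        /* [15:0]  entries - 1   */
        unsigned to     = (unsigned)((cap >> 24) & 0xff);  /* [31:24] 500 ms units  */
        unsigned dstrd  = (unsigned)((cap >> 32) & 0xf);   /* [35:32] stride log2   */
        unsigned mpsmin = (unsigned)((cap >> 48) & 0xf);   /* [51:48] page min log2 */
        unsigned mpsmax = (unsigned)((cap >> 52) & 0xf);   /* [55:52] page max log2 */

        printf("Maximum Queue Entries: %u\n", mqes + 1);           /* 2048    */
        printf("Reset Timeout: %u ms\n", to * 500);                /* 7500 ms */
        printf("Doorbell Stride: %u bytes\n", 4u << dstrd);        /* 4 bytes */
        printf("Memory Page Size Minimum: %u bytes\n", 1u << (12 + mpsmin));
        printf("Memory Page Size Maximum: %u bytes\n", 1u << (12 + mpsmax));
        return 0;
    }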
00:22:25.374 Multiple Update Detection Support: N/A 00:22:25.374 Firmware Update Granularity: No Information Provided 00:22:25.374 Per-Namespace SMART Log: Yes 00:22:25.374 Asymmetric Namespace Access Log Page: Not Supported 00:22:25.374 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:22:25.374 Command Effects Log Page: Supported 00:22:25.374 Get Log Page Extended Data: Supported 00:22:25.374 Telemetry Log Pages: Not Supported 00:22:25.374 Persistent Event Log Pages: Not Supported 00:22:25.374 Supported Log Pages Log Page: May Support 00:22:25.374 Commands Supported & Effects Log Page: Not Supported 00:22:25.374 Feature Identifiers & Effects Log Page:May Support 00:22:25.374 NVMe-MI Commands & Effects Log Page: May Support 00:22:25.374 Data Area 4 for Telemetry Log: Not Supported 00:22:25.374 Error Log Page Entries Supported: 1 00:22:25.374 Keep Alive: Not Supported 00:22:25.374 00:22:25.374 NVM Command Set Attributes 00:22:25.374 ========================== 00:22:25.374 Submission Queue Entry Size 00:22:25.374 Max: 64 00:22:25.374 Min: 64 00:22:25.374 Completion Queue Entry Size 00:22:25.374 Max: 16 00:22:25.374 Min: 16 00:22:25.374 Number of Namespaces: 256 00:22:25.374 Compare Command: Supported 00:22:25.374 Write Uncorrectable Command: Not Supported 00:22:25.374 Dataset Management Command: Supported 00:22:25.374 Write Zeroes Command: Supported 00:22:25.374 Set Features Save Field: Supported 00:22:25.374 Reservations: Not Supported 00:22:25.374 Timestamp: Supported 00:22:25.374 Copy: Supported 00:22:25.374 Volatile Write Cache: Present 00:22:25.374 Atomic Write Unit (Normal): 1 00:22:25.375 Atomic Write Unit (PFail): 1 00:22:25.375 Atomic Compare & Write Unit: 1 00:22:25.375 Fused Compare & Write: Not Supported 00:22:25.375 Scatter-Gather List 00:22:25.375 SGL Command Set: Supported 00:22:25.375 SGL Keyed: Not Supported 00:22:25.375 SGL Bit Bucket Descriptor: Not Supported 00:22:25.375 SGL Metadata Pointer: Not Supported 00:22:25.375 Oversized SGL: Not Supported 00:22:25.375 SGL Metadata Address: Not Supported 00:22:25.375 SGL Offset: Not Supported 00:22:25.375 Transport SGL Data Block: Not Supported 00:22:25.375 Replay Protected Memory Block: Not Supported 00:22:25.375 00:22:25.375 Firmware Slot Information 00:22:25.375 ========================= 00:22:25.375 Active slot: 1 00:22:25.375 Slot 1 Firmware Revision: 1.0 00:22:25.375 00:22:25.375 00:22:25.375 Commands Supported and Effects 00:22:25.375 ============================== 00:22:25.375 Admin Commands 00:22:25.375 -------------- 00:22:25.375 Delete I/O Submission Queue (00h): Supported 00:22:25.375 Create I/O Submission Queue (01h): Supported 00:22:25.375 Get Log Page (02h): Supported 00:22:25.375 Delete I/O Completion Queue (04h): Supported 00:22:25.375 Create I/O Completion Queue (05h): Supported 00:22:25.375 Identify (06h): Supported 00:22:25.375 Abort (08h): Supported 00:22:25.375 Set Features (09h): Supported 00:22:25.375 Get Features (0Ah): Supported 00:22:25.375 Asynchronous Event Request (0Ch): Supported 00:22:25.375 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:25.375 Directive Send (19h): Supported 00:22:25.375 Directive Receive (1Ah): Supported 00:22:25.375 Virtualization Management (1Ch): Supported 00:22:25.375 Doorbell Buffer Config (7Ch): Supported 00:22:25.375 Format NVM (80h): Supported LBA-Change 00:22:25.375 I/O Commands 00:22:25.375 ------------ 00:22:25.375 Flush (00h): Supported LBA-Change 00:22:25.375 Write (01h): Supported LBA-Change 00:22:25.375 Read (02h): Supported 00:22:25.375 Compare (05h): 
Supported 00:22:25.375 Write Zeroes (08h): Supported LBA-Change 00:22:25.375 Dataset Management (09h): Supported LBA-Change 00:22:25.375 Unknown (0Ch): Supported 00:22:25.375 Unknown (12h): Supported 00:22:25.375 Copy (19h): Supported LBA-Change 00:22:25.375 Unknown (1Dh): Supported LBA-Change 00:22:25.375 00:22:25.375 Error Log 00:22:25.375 ========= 00:22:25.375 00:22:25.375 Arbitration 00:22:25.375 =========== 00:22:25.375 Arbitration Burst: no limit 00:22:25.375 00:22:25.375 Power Management 00:22:25.375 ================ 00:22:25.375 Number of Power States: 1 00:22:25.375 Current Power State: Power State #0 00:22:25.375 Power State #0: 00:22:25.375 Max Power: 25.00 W 00:22:25.375 Non-Operational State: Operational 00:22:25.375 Entry Latency: 16 microseconds 00:22:25.375 Exit Latency: 4 microseconds 00:22:25.375 Relative Read Throughput: 0 00:22:25.375 Relative Read Latency: 0 00:22:25.375 Relative Write Throughput: 0 00:22:25.375 Relative Write Latency: 0 00:22:25.375 Idle Power: Not Reported 00:22:25.375 Active Power: Not Reported 00:22:25.375 Non-Operational Permissive Mode: Not Supported 00:22:25.375 00:22:25.375 Health Information 00:22:25.375 ================== 00:22:25.375 Critical Warnings: 00:22:25.375 Available Spare Space: OK 00:22:25.375 Temperature: OK 00:22:25.375 Device Reliability: OK 00:22:25.375 Read Only: No 00:22:25.375 Volatile Memory Backup: OK 00:22:25.375 Current Temperature: 323 Kelvin (50 Celsius) 00:22:25.375 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:25.375 Available Spare: 0% 00:22:25.375 Available Spare Threshold: 0% 00:22:25.375 Life Percentage Used: 0% 00:22:25.375 Data Units Read: 2255 00:22:25.375 Data Units Written: 2042 00:22:25.375 Host Read Commands: 90009 00:22:25.375 Host Write Commands: 88278 00:22:25.375 Controller Busy Time: 0 minutes 00:22:25.375 Power Cycles: 0 00:22:25.375 Power On Hours: 0 hours 00:22:25.375 Unsafe Shutdowns: 0 00:22:25.375 Unrecoverable Media Errors: 0 00:22:25.375 Lifetime Error Log Entries: 0 00:22:25.375 Warning Temperature Time: 0 minutes 00:22:25.375 Critical Temperature Time: 0 minutes 00:22:25.375 00:22:25.375 Number of Queues 00:22:25.375 ================ 00:22:25.375 Number of I/O Submission Queues: 64 00:22:25.375 Number of I/O Completion Queues: 64 00:22:25.375 00:22:25.375 ZNS Specific Controller Data 00:22:25.375 ============================ 00:22:25.375 Zone Append Size Limit: 0 00:22:25.375 00:22:25.375 00:22:25.375 Active Namespaces 00:22:25.375 ================= 00:22:25.375 Namespace ID:1 00:22:25.375 Error Recovery Timeout: Unlimited 00:22:25.375 Command Set Identifier: NVM (00h) 00:22:25.375 Deallocate: Supported 00:22:25.375 Deallocated/Unwritten Error: Supported 00:22:25.375 Deallocated Read Value: All 0x00 00:22:25.375 Deallocate in Write Zeroes: Not Supported 00:22:25.375 Deallocated Guard Field: 0xFFFF 00:22:25.375 Flush: Supported 00:22:25.375 Reservation: Not Supported 00:22:25.375 Namespace Sharing Capabilities: Private 00:22:25.375 Size (in LBAs): 1048576 (4GiB) 00:22:25.375 Capacity (in LBAs): 1048576 (4GiB) 00:22:25.375 Utilization (in LBAs): 1048576 (4GiB) 00:22:25.375 Thin Provisioning: Not Supported 00:22:25.375 Per-NS Atomic Units: No 00:22:25.375 Maximum Single Source Range Length: 128 00:22:25.375 Maximum Copy Length: 128 00:22:25.375 Maximum Source Range Count: 128 00:22:25.375 NGUID/EUI64 Never Reused: No 00:22:25.375 Namespace Write Protected: No 00:22:25.375 Number of LBA Formats: 8 00:22:25.375 Current LBA Format: LBA Format #04 00:22:25.375 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:22:25.375 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:25.375 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:25.375 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:25.375 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:25.375 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:25.375 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:25.375 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:25.375 00:22:25.375 NVM Specific Namespace Data 00:22:25.375 =========================== 00:22:25.375 Logical Block Storage Tag Mask: 0 00:22:25.375 Protection Information Capabilities: 00:22:25.375 16b Guard Protection Information Storage Tag Support: No 00:22:25.375 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:25.375 Storage Tag Check Read Support: No 00:22:25.375 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.375 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.375 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.375 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.375 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.375 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.375 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.375 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.375 Namespace ID:2 00:22:25.375 Error Recovery Timeout: Unlimited 00:22:25.375 Command Set Identifier: NVM (00h) 00:22:25.375 Deallocate: Supported 00:22:25.375 Deallocated/Unwritten Error: Supported 00:22:25.375 Deallocated Read Value: All 0x00 00:22:25.375 Deallocate in Write Zeroes: Not Supported 00:22:25.375 Deallocated Guard Field: 0xFFFF 00:22:25.375 Flush: Supported 00:22:25.375 Reservation: Not Supported 00:22:25.375 Namespace Sharing Capabilities: Private 00:22:25.375 Size (in LBAs): 1048576 (4GiB) 00:22:25.375 Capacity (in LBAs): 1048576 (4GiB) 00:22:25.375 Utilization (in LBAs): 1048576 (4GiB) 00:22:25.375 Thin Provisioning: Not Supported 00:22:25.375 Per-NS Atomic Units: No 00:22:25.375 Maximum Single Source Range Length: 128 00:22:25.375 Maximum Copy Length: 128 00:22:25.375 Maximum Source Range Count: 128 00:22:25.375 NGUID/EUI64 Never Reused: No 00:22:25.375 Namespace Write Protected: No 00:22:25.375 Number of LBA Formats: 8 00:22:25.375 Current LBA Format: LBA Format #04 00:22:25.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:25.375 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:25.375 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:25.375 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:25.375 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:25.375 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:25.375 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:25.375 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:25.375 00:22:25.375 NVM Specific Namespace Data 00:22:25.375 =========================== 00:22:25.375 Logical Block Storage Tag Mask: 0 00:22:25.375 Protection Information Capabilities: 00:22:25.375 16b Guard Protection Information Storage Tag Support: No 00:22:25.375 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
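Each "LBA Format #NN: Data Size: ... Metadata Size: ..." row above is a rendering of one LBAF entry from Identify Namespace, where the data size is stored as a power-of-two exponent (LBADS, bits 23:16) and the metadata size as a plain byte count (MS, bits 15:0). A small sketch of that decoding, with the entries fabricated to mirror the eight formats listed:

    #include <stdio.h>
    #include <stdint.h>

    /* One LBA Format entry per the NVMe base spec:
     * bits [15:0] MS (metadata bytes), [23:16] LBADS (log2 of data size). */
    static void print_lbaf(int idx, uint32_t lbaf)
    {
        uint16_t ms    = lbaf & 0xffff;
        uint8_t  lbads = (lbaf >> 16) & 0xff;
        printf("LBA Format #%02d: Data Size: %u Metadata Size: %u\n",
               idx, 1u << lbads, ms);
    }

    int main(void)
    {
        /* Fabricated: 512-byte (LBADS=9) and 4096-byte (LBADS=12) data sizes
         * with 0/8/16/64 bytes of metadata, matching the table above. */
        uint32_t lbafs[8] = {
            (9u << 16) | 0,  (9u << 16) | 8,  (9u << 16) | 16,  (9u << 16) | 64,
            (12u << 16) | 0, (12u << 16) | 8, (12u << 16) | 16, (12u << 16) | 64,
        };
        for (int i = 0; i < 8; i++)
            print_lbaf(i, lbafs[i]);
        return 0;
    }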
00:22:25.375 Storage Tag Check Read Support: No 00:22:25.375 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.375 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.375 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Namespace ID:3 00:22:25.376 Error Recovery Timeout: Unlimited 00:22:25.376 Command Set Identifier: NVM (00h) 00:22:25.376 Deallocate: Supported 00:22:25.376 Deallocated/Unwritten Error: Supported 00:22:25.376 Deallocated Read Value: All 0x00 00:22:25.376 Deallocate in Write Zeroes: Not Supported 00:22:25.376 Deallocated Guard Field: 0xFFFF 00:22:25.376 Flush: Supported 00:22:25.376 Reservation: Not Supported 00:22:25.376 Namespace Sharing Capabilities: Private 00:22:25.376 Size (in LBAs): 1048576 (4GiB) 00:22:25.376 Capacity (in LBAs): 1048576 (4GiB) 00:22:25.376 Utilization (in LBAs): 1048576 (4GiB) 00:22:25.376 Thin Provisioning: Not Supported 00:22:25.376 Per-NS Atomic Units: No 00:22:25.376 Maximum Single Source Range Length: 128 00:22:25.376 Maximum Copy Length: 128 00:22:25.376 Maximum Source Range Count: 128 00:22:25.376 NGUID/EUI64 Never Reused: No 00:22:25.376 Namespace Write Protected: No 00:22:25.376 Number of LBA Formats: 8 00:22:25.376 Current LBA Format: LBA Format #04 00:22:25.376 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:25.376 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:25.376 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:25.376 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:25.376 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:25.376 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:25.376 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:25.376 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:25.376 00:22:25.376 NVM Specific Namespace Data 00:22:25.376 =========================== 00:22:25.376 Logical Block Storage Tag Mask: 0 00:22:25.376 Protection Information Capabilities: 00:22:25.376 16b Guard Protection Information Storage Tag Support: No 00:22:25.376 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:25.376 Storage Tag Check Read Support: No 00:22:25.376 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.376 23:04:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:22:25.376 23:04:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:25.637 ===================================================== 00:22:25.637 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:25.637 ===================================================== 00:22:25.637 Controller Capabilities/Features 00:22:25.637 ================================ 00:22:25.637 Vendor ID: 1b36 00:22:25.637 Subsystem Vendor ID: 1af4 00:22:25.637 Serial Number: 12340 00:22:25.637 Model Number: QEMU NVMe Ctrl 00:22:25.637 Firmware Version: 8.0.0 00:22:25.637 Recommended Arb Burst: 6 00:22:25.637 IEEE OUI Identifier: 00 54 52 00:22:25.637 Multi-path I/O 00:22:25.637 May have multiple subsystem ports: No 00:22:25.637 May have multiple controllers: No 00:22:25.637 Associated with SR-IOV VF: No 00:22:25.637 Max Data Transfer Size: 524288 00:22:25.637 Max Number of Namespaces: 256 00:22:25.637 Max Number of I/O Queues: 64 00:22:25.637 NVMe Specification Version (VS): 1.4 00:22:25.637 NVMe Specification Version (Identify): 1.4 00:22:25.637 Maximum Queue Entries: 2048 00:22:25.637 Contiguous Queues Required: Yes 00:22:25.637 Arbitration Mechanisms Supported 00:22:25.637 Weighted Round Robin: Not Supported 00:22:25.637 Vendor Specific: Not Supported 00:22:25.637 Reset Timeout: 7500 ms 00:22:25.637 Doorbell Stride: 4 bytes 00:22:25.637 NVM Subsystem Reset: Not Supported 00:22:25.637 Command Sets Supported 00:22:25.637 NVM Command Set: Supported 00:22:25.637 Boot Partition: Not Supported 00:22:25.637 Memory Page Size Minimum: 4096 bytes 00:22:25.637 Memory Page Size Maximum: 65536 bytes 00:22:25.637 Persistent Memory Region: Not Supported 00:22:25.637 Optional Asynchronous Events Supported 00:22:25.637 Namespace Attribute Notices: Supported 00:22:25.637 Firmware Activation Notices: Not Supported 00:22:25.637 ANA Change Notices: Not Supported 00:22:25.637 PLE Aggregate Log Change Notices: Not Supported 00:22:25.637 LBA Status Info Alert Notices: Not Supported 00:22:25.637 EGE Aggregate Log Change Notices: Not Supported 00:22:25.637 Normal NVM Subsystem Shutdown event: Not Supported 00:22:25.637 Zone Descriptor Change Notices: Not Supported 00:22:25.637 Discovery Log Change Notices: Not Supported 00:22:25.637 Controller Attributes 00:22:25.637 128-bit Host Identifier: Not Supported 00:22:25.637 Non-Operational Permissive Mode: Not Supported 00:22:25.637 NVM Sets: Not Supported 00:22:25.637 Read Recovery Levels: Not Supported 00:22:25.637 Endurance Groups: Not Supported 00:22:25.637 Predictable Latency Mode: Not Supported 00:22:25.637 Traffic Based Keep ALive: Not Supported 00:22:25.637 Namespace Granularity: Not Supported 00:22:25.637 SQ Associations: Not Supported 00:22:25.637 UUID List: Not Supported 00:22:25.637 Multi-Domain Subsystem: Not Supported 00:22:25.637 Fixed Capacity Management: Not Supported 00:22:25.637 Variable Capacity Management: Not Supported 00:22:25.637 Delete Endurance Group: Not Supported 00:22:25.637 Delete NVM Set: Not Supported 00:22:25.637 Extended LBA Formats Supported: Supported 00:22:25.637 Flexible Data Placement Supported: Not Supported 00:22:25.637 00:22:25.637 Controller Memory Buffer Support 00:22:25.637 ================================ 00:22:25.637 Supported: No 00:22:25.637 00:22:25.637 Persistent Memory Region Support 00:22:25.637 
================================ 00:22:25.637 Supported: No 00:22:25.637 00:22:25.637 Admin Command Set Attributes 00:22:25.637 ============================ 00:22:25.637 Security Send/Receive: Not Supported 00:22:25.637 Format NVM: Supported 00:22:25.637 Firmware Activate/Download: Not Supported 00:22:25.638 Namespace Management: Supported 00:22:25.638 Device Self-Test: Not Supported 00:22:25.638 Directives: Supported 00:22:25.638 NVMe-MI: Not Supported 00:22:25.638 Virtualization Management: Not Supported 00:22:25.638 Doorbell Buffer Config: Supported 00:22:25.638 Get LBA Status Capability: Not Supported 00:22:25.638 Command & Feature Lockdown Capability: Not Supported 00:22:25.638 Abort Command Limit: 4 00:22:25.638 Async Event Request Limit: 4 00:22:25.638 Number of Firmware Slots: N/A 00:22:25.638 Firmware Slot 1 Read-Only: N/A 00:22:25.638 Firmware Activation Without Reset: N/A 00:22:25.638 Multiple Update Detection Support: N/A 00:22:25.638 Firmware Update Granularity: No Information Provided 00:22:25.638 Per-Namespace SMART Log: Yes 00:22:25.638 Asymmetric Namespace Access Log Page: Not Supported 00:22:25.638 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:22:25.638 Command Effects Log Page: Supported 00:22:25.638 Get Log Page Extended Data: Supported 00:22:25.638 Telemetry Log Pages: Not Supported 00:22:25.638 Persistent Event Log Pages: Not Supported 00:22:25.638 Supported Log Pages Log Page: May Support 00:22:25.638 Commands Supported & Effects Log Page: Not Supported 00:22:25.638 Feature Identifiers & Effects Log Page:May Support 00:22:25.638 NVMe-MI Commands & Effects Log Page: May Support 00:22:25.638 Data Area 4 for Telemetry Log: Not Supported 00:22:25.638 Error Log Page Entries Supported: 1 00:22:25.638 Keep Alive: Not Supported 00:22:25.638 00:22:25.638 NVM Command Set Attributes 00:22:25.638 ========================== 00:22:25.638 Submission Queue Entry Size 00:22:25.638 Max: 64 00:22:25.638 Min: 64 00:22:25.638 Completion Queue Entry Size 00:22:25.638 Max: 16 00:22:25.638 Min: 16 00:22:25.638 Number of Namespaces: 256 00:22:25.638 Compare Command: Supported 00:22:25.638 Write Uncorrectable Command: Not Supported 00:22:25.638 Dataset Management Command: Supported 00:22:25.638 Write Zeroes Command: Supported 00:22:25.638 Set Features Save Field: Supported 00:22:25.638 Reservations: Not Supported 00:22:25.638 Timestamp: Supported 00:22:25.638 Copy: Supported 00:22:25.638 Volatile Write Cache: Present 00:22:25.638 Atomic Write Unit (Normal): 1 00:22:25.638 Atomic Write Unit (PFail): 1 00:22:25.638 Atomic Compare & Write Unit: 1 00:22:25.638 Fused Compare & Write: Not Supported 00:22:25.638 Scatter-Gather List 00:22:25.638 SGL Command Set: Supported 00:22:25.638 SGL Keyed: Not Supported 00:22:25.638 SGL Bit Bucket Descriptor: Not Supported 00:22:25.638 SGL Metadata Pointer: Not Supported 00:22:25.638 Oversized SGL: Not Supported 00:22:25.638 SGL Metadata Address: Not Supported 00:22:25.638 SGL Offset: Not Supported 00:22:25.638 Transport SGL Data Block: Not Supported 00:22:25.638 Replay Protected Memory Block: Not Supported 00:22:25.638 00:22:25.638 Firmware Slot Information 00:22:25.638 ========================= 00:22:25.638 Active slot: 1 00:22:25.638 Slot 1 Firmware Revision: 1.0 00:22:25.638 00:22:25.638 00:22:25.638 Commands Supported and Effects 00:22:25.638 ============================== 00:22:25.638 Admin Commands 00:22:25.638 -------------- 00:22:25.638 Delete I/O Submission Queue (00h): Supported 00:22:25.638 Create I/O Submission Queue (01h): Supported 00:22:25.638 
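The "Max: 64 Min: 64" / "Max: 16 Min: 16" entry sizes above come from the SQES and CQES bytes of Identify Controller, each of which packs the required entry size in its low nibble and the maximum in its high nibble, both as log2 byte counts. A sketch under that assumption, with the byte values fabricated to match:

    #include <stdio.h>
    #include <stdint.h>

    /* SQES/CQES: low nibble = required (min) size, high nibble = max size,
     * each expressed as a power-of-two exponent in bytes. */
    static void print_entry_sizes(const char *queue, uint8_t es)
    {
        printf("%s Queue Entry Size  Max: %u  Min: %u\n",
               queue, 1u << (es >> 4), 1u << (es & 0xf));
    }

    int main(void)
    {
        uint8_t sqes = 0x66; /* fabricated: 64-byte SQ entries, min == max */
        uint8_t cqes = 0x44; /* fabricated: 16-byte CQ entries, min == max */
        print_entry_sizes("Submission", sqes);
        print_entry_sizes("Completion", cqes);
        return 0;
    }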
Get Log Page (02h): Supported 00:22:25.638 Delete I/O Completion Queue (04h): Supported 00:22:25.638 Create I/O Completion Queue (05h): Supported 00:22:25.638 Identify (06h): Supported 00:22:25.638 Abort (08h): Supported 00:22:25.638 Set Features (09h): Supported 00:22:25.638 Get Features (0Ah): Supported 00:22:25.638 Asynchronous Event Request (0Ch): Supported 00:22:25.638 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:25.638 Directive Send (19h): Supported 00:22:25.638 Directive Receive (1Ah): Supported 00:22:25.638 Virtualization Management (1Ch): Supported 00:22:25.638 Doorbell Buffer Config (7Ch): Supported 00:22:25.638 Format NVM (80h): Supported LBA-Change 00:22:25.638 I/O Commands 00:22:25.638 ------------ 00:22:25.638 Flush (00h): Supported LBA-Change 00:22:25.638 Write (01h): Supported LBA-Change 00:22:25.638 Read (02h): Supported 00:22:25.638 Compare (05h): Supported 00:22:25.638 Write Zeroes (08h): Supported LBA-Change 00:22:25.638 Dataset Management (09h): Supported LBA-Change 00:22:25.638 Unknown (0Ch): Supported 00:22:25.638 Unknown (12h): Supported 00:22:25.638 Copy (19h): Supported LBA-Change 00:22:25.638 Unknown (1Dh): Supported LBA-Change 00:22:25.638 00:22:25.638 Error Log 00:22:25.638 ========= 00:22:25.638 00:22:25.638 Arbitration 00:22:25.638 =========== 00:22:25.638 Arbitration Burst: no limit 00:22:25.638 00:22:25.638 Power Management 00:22:25.638 ================ 00:22:25.638 Number of Power States: 1 00:22:25.638 Current Power State: Power State #0 00:22:25.638 Power State #0: 00:22:25.638 Max Power: 25.00 W 00:22:25.638 Non-Operational State: Operational 00:22:25.638 Entry Latency: 16 microseconds 00:22:25.638 Exit Latency: 4 microseconds 00:22:25.638 Relative Read Throughput: 0 00:22:25.638 Relative Read Latency: 0 00:22:25.638 Relative Write Throughput: 0 00:22:25.638 Relative Write Latency: 0 00:22:25.638 Idle Power: Not Reported 00:22:25.638 Active Power: Not Reported 00:22:25.638 Non-Operational Permissive Mode: Not Supported 00:22:25.638 00:22:25.638 Health Information 00:22:25.638 ================== 00:22:25.638 Critical Warnings: 00:22:25.638 Available Spare Space: OK 00:22:25.638 Temperature: OK 00:22:25.638 Device Reliability: OK 00:22:25.638 Read Only: No 00:22:25.638 Volatile Memory Backup: OK 00:22:25.638 Current Temperature: 323 Kelvin (50 Celsius) 00:22:25.638 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:25.638 Available Spare: 0% 00:22:25.638 Available Spare Threshold: 0% 00:22:25.638 Life Percentage Used: 0% 00:22:25.638 Data Units Read: 729 00:22:25.638 Data Units Written: 657 00:22:25.638 Host Read Commands: 29560 00:22:25.638 Host Write Commands: 29346 00:22:25.638 Controller Busy Time: 0 minutes 00:22:25.638 Power Cycles: 0 00:22:25.638 Power On Hours: 0 hours 00:22:25.638 Unsafe Shutdowns: 0 00:22:25.638 Unrecoverable Media Errors: 0 00:22:25.638 Lifetime Error Log Entries: 0 00:22:25.638 Warning Temperature Time: 0 minutes 00:22:25.638 Critical Temperature Time: 0 minutes 00:22:25.638 00:22:25.638 Number of Queues 00:22:25.638 ================ 00:22:25.638 Number of I/O Submission Queues: 64 00:22:25.638 Number of I/O Completion Queues: 64 00:22:25.638 00:22:25.638 ZNS Specific Controller Data 00:22:25.638 ============================ 00:22:25.638 Zone Append Size Limit: 0 00:22:25.638 00:22:25.638 00:22:25.638 Active Namespaces 00:22:25.638 ================= 00:22:25.638 Namespace ID:1 00:22:25.638 Error Recovery Timeout: Unlimited 00:22:25.638 Command Set Identifier: NVM (00h) 00:22:25.638 Deallocate: Supported 
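"Max Power: 25.00 W" and the 16/4 microsecond latencies in the Power Management block above are taken from power state descriptor 0 in Identify Controller: MP is in 0.01 W units (with the default power scale), and ENLAT/EXLAT are 32-bit microsecond counts. A hedged sketch with a trimmed-down descriptor and values fabricated to match the dump:

    #include <stdio.h>
    #include <stdint.h>

    /* Trimmed-down power state descriptor after the PSD layout in Identify
     * Controller: only the fields rendered in the log above. */
    struct psd {
        uint16_t mp;     /* max power, 0.01 W units (default power scale) */
        uint32_t enlat;  /* entry latency, microseconds */
        uint32_t exlat;  /* exit latency, microseconds */
    };

    int main(void)
    {
        /* Fabricated to match "Power State #0" in the dump above. */
        struct psd ps0 = { .mp = 2500, .enlat = 16, .exlat = 4 };
        printf("Max Power: %u.%02u W\n", ps0.mp / 100, ps0.mp % 100);
        printf("Entry Latency: %u microseconds\n", ps0.enlat);
        printf("Exit Latency: %u microseconds\n", ps0.exlat);
        return 0;
    }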
00:22:25.638 Deallocated/Unwritten Error: Supported 00:22:25.638 Deallocated Read Value: All 0x00 00:22:25.638 Deallocate in Write Zeroes: Not Supported 00:22:25.638 Deallocated Guard Field: 0xFFFF 00:22:25.638 Flush: Supported 00:22:25.638 Reservation: Not Supported 00:22:25.638 Metadata Transferred as: Separate Metadata Buffer 00:22:25.638 Namespace Sharing Capabilities: Private 00:22:25.638 Size (in LBAs): 1548666 (5GiB) 00:22:25.638 Capacity (in LBAs): 1548666 (5GiB) 00:22:25.638 Utilization (in LBAs): 1548666 (5GiB) 00:22:25.638 Thin Provisioning: Not Supported 00:22:25.638 Per-NS Atomic Units: No 00:22:25.638 Maximum Single Source Range Length: 128 00:22:25.638 Maximum Copy Length: 128 00:22:25.638 Maximum Source Range Count: 128 00:22:25.638 NGUID/EUI64 Never Reused: No 00:22:25.638 Namespace Write Protected: No 00:22:25.638 Number of LBA Formats: 8 00:22:25.638 Current LBA Format: LBA Format #07 00:22:25.638 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:25.638 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:25.638 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:25.638 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:25.638 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:25.638 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:25.638 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:25.638 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:25.638 00:22:25.638 NVM Specific Namespace Data 00:22:25.638 =========================== 00:22:25.638 Logical Block Storage Tag Mask: 0 00:22:25.638 Protection Information Capabilities: 00:22:25.638 16b Guard Protection Information Storage Tag Support: No 00:22:25.638 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:25.638 Storage Tag Check Read Support: No 00:22:25.638 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.638 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.638 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.638 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.638 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.638 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.639 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.639 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.639 23:04:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:22:25.639 23:04:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:22:25.907 ===================================================== 00:22:25.907 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:25.907 ===================================================== 00:22:25.907 Controller Capabilities/Features 00:22:25.907 ================================ 00:22:25.907 Vendor ID: 1b36 00:22:25.907 Subsystem Vendor ID: 1af4 00:22:25.907 Serial Number: 12341 00:22:25.907 Model Number: QEMU NVMe Ctrl 00:22:25.907 Firmware Version: 8.0.0 00:22:25.907 Recommended Arb Burst: 6 00:22:25.907 IEEE OUI Identifier: 00 54 52 00:22:25.907 Multi-path I/O 00:22:25.907 May have multiple subsystem ports: No 00:22:25.907 May have multiple 
controllers: No 00:22:25.907 Associated with SR-IOV VF: No 00:22:25.907 Max Data Transfer Size: 524288 00:22:25.907 Max Number of Namespaces: 256 00:22:25.907 Max Number of I/O Queues: 64 00:22:25.907 NVMe Specification Version (VS): 1.4 00:22:25.907 NVMe Specification Version (Identify): 1.4 00:22:25.907 Maximum Queue Entries: 2048 00:22:25.907 Contiguous Queues Required: Yes 00:22:25.907 Arbitration Mechanisms Supported 00:22:25.907 Weighted Round Robin: Not Supported 00:22:25.907 Vendor Specific: Not Supported 00:22:25.907 Reset Timeout: 7500 ms 00:22:25.907 Doorbell Stride: 4 bytes 00:22:25.907 NVM Subsystem Reset: Not Supported 00:22:25.907 Command Sets Supported 00:22:25.907 NVM Command Set: Supported 00:22:25.907 Boot Partition: Not Supported 00:22:25.907 Memory Page Size Minimum: 4096 bytes 00:22:25.908 Memory Page Size Maximum: 65536 bytes 00:22:25.908 Persistent Memory Region: Not Supported 00:22:25.908 Optional Asynchronous Events Supported 00:22:25.908 Namespace Attribute Notices: Supported 00:22:25.908 Firmware Activation Notices: Not Supported 00:22:25.908 ANA Change Notices: Not Supported 00:22:25.908 PLE Aggregate Log Change Notices: Not Supported 00:22:25.908 LBA Status Info Alert Notices: Not Supported 00:22:25.908 EGE Aggregate Log Change Notices: Not Supported 00:22:25.908 Normal NVM Subsystem Shutdown event: Not Supported 00:22:25.908 Zone Descriptor Change Notices: Not Supported 00:22:25.908 Discovery Log Change Notices: Not Supported 00:22:25.908 Controller Attributes 00:22:25.908 128-bit Host Identifier: Not Supported 00:22:25.908 Non-Operational Permissive Mode: Not Supported 00:22:25.908 NVM Sets: Not Supported 00:22:25.908 Read Recovery Levels: Not Supported 00:22:25.908 Endurance Groups: Not Supported 00:22:25.908 Predictable Latency Mode: Not Supported 00:22:25.908 Traffic Based Keep ALive: Not Supported 00:22:25.908 Namespace Granularity: Not Supported 00:22:25.908 SQ Associations: Not Supported 00:22:25.908 UUID List: Not Supported 00:22:25.908 Multi-Domain Subsystem: Not Supported 00:22:25.908 Fixed Capacity Management: Not Supported 00:22:25.908 Variable Capacity Management: Not Supported 00:22:25.908 Delete Endurance Group: Not Supported 00:22:25.908 Delete NVM Set: Not Supported 00:22:25.908 Extended LBA Formats Supported: Supported 00:22:25.908 Flexible Data Placement Supported: Not Supported 00:22:25.908 00:22:25.908 Controller Memory Buffer Support 00:22:25.908 ================================ 00:22:25.908 Supported: No 00:22:25.908 00:22:25.908 Persistent Memory Region Support 00:22:25.908 ================================ 00:22:25.908 Supported: No 00:22:25.908 00:22:25.908 Admin Command Set Attributes 00:22:25.908 ============================ 00:22:25.908 Security Send/Receive: Not Supported 00:22:25.908 Format NVM: Supported 00:22:25.908 Firmware Activate/Download: Not Supported 00:22:25.908 Namespace Management: Supported 00:22:25.908 Device Self-Test: Not Supported 00:22:25.908 Directives: Supported 00:22:25.908 NVMe-MI: Not Supported 00:22:25.908 Virtualization Management: Not Supported 00:22:25.908 Doorbell Buffer Config: Supported 00:22:25.908 Get LBA Status Capability: Not Supported 00:22:25.908 Command & Feature Lockdown Capability: Not Supported 00:22:25.908 Abort Command Limit: 4 00:22:25.908 Async Event Request Limit: 4 00:22:25.908 Number of Firmware Slots: N/A 00:22:25.908 Firmware Slot 1 Read-Only: N/A 00:22:25.908 Firmware Activation Without Reset: N/A 00:22:25.908 Multiple Update Detection Support: N/A 00:22:25.908 Firmware Update 
Granularity: No Information Provided 00:22:25.908 Per-Namespace SMART Log: Yes 00:22:25.908 Asymmetric Namespace Access Log Page: Not Supported 00:22:25.908 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:22:25.908 Command Effects Log Page: Supported 00:22:25.908 Get Log Page Extended Data: Supported 00:22:25.908 Telemetry Log Pages: Not Supported 00:22:25.908 Persistent Event Log Pages: Not Supported 00:22:25.908 Supported Log Pages Log Page: May Support 00:22:25.908 Commands Supported & Effects Log Page: Not Supported 00:22:25.908 Feature Identifiers & Effects Log Page:May Support 00:22:25.908 NVMe-MI Commands & Effects Log Page: May Support 00:22:25.908 Data Area 4 for Telemetry Log: Not Supported 00:22:25.908 Error Log Page Entries Supported: 1 00:22:25.908 Keep Alive: Not Supported 00:22:25.908 00:22:25.908 NVM Command Set Attributes 00:22:25.908 ========================== 00:22:25.908 Submission Queue Entry Size 00:22:25.908 Max: 64 00:22:25.908 Min: 64 00:22:25.908 Completion Queue Entry Size 00:22:25.908 Max: 16 00:22:25.908 Min: 16 00:22:25.908 Number of Namespaces: 256 00:22:25.908 Compare Command: Supported 00:22:25.908 Write Uncorrectable Command: Not Supported 00:22:25.908 Dataset Management Command: Supported 00:22:25.908 Write Zeroes Command: Supported 00:22:25.908 Set Features Save Field: Supported 00:22:25.908 Reservations: Not Supported 00:22:25.908 Timestamp: Supported 00:22:25.908 Copy: Supported 00:22:25.908 Volatile Write Cache: Present 00:22:25.908 Atomic Write Unit (Normal): 1 00:22:25.908 Atomic Write Unit (PFail): 1 00:22:25.908 Atomic Compare & Write Unit: 1 00:22:25.908 Fused Compare & Write: Not Supported 00:22:25.908 Scatter-Gather List 00:22:25.908 SGL Command Set: Supported 00:22:25.908 SGL Keyed: Not Supported 00:22:25.908 SGL Bit Bucket Descriptor: Not Supported 00:22:25.908 SGL Metadata Pointer: Not Supported 00:22:25.908 Oversized SGL: Not Supported 00:22:25.908 SGL Metadata Address: Not Supported 00:22:25.908 SGL Offset: Not Supported 00:22:25.908 Transport SGL Data Block: Not Supported 00:22:25.908 Replay Protected Memory Block: Not Supported 00:22:25.908 00:22:25.908 Firmware Slot Information 00:22:25.908 ========================= 00:22:25.908 Active slot: 1 00:22:25.908 Slot 1 Firmware Revision: 1.0 00:22:25.908 00:22:25.908 00:22:25.908 Commands Supported and Effects 00:22:25.908 ============================== 00:22:25.908 Admin Commands 00:22:25.908 -------------- 00:22:25.908 Delete I/O Submission Queue (00h): Supported 00:22:25.908 Create I/O Submission Queue (01h): Supported 00:22:25.908 Get Log Page (02h): Supported 00:22:25.908 Delete I/O Completion Queue (04h): Supported 00:22:25.908 Create I/O Completion Queue (05h): Supported 00:22:25.908 Identify (06h): Supported 00:22:25.908 Abort (08h): Supported 00:22:25.908 Set Features (09h): Supported 00:22:25.908 Get Features (0Ah): Supported 00:22:25.908 Asynchronous Event Request (0Ch): Supported 00:22:25.908 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:25.908 Directive Send (19h): Supported 00:22:25.908 Directive Receive (1Ah): Supported 00:22:25.908 Virtualization Management (1Ch): Supported 00:22:25.908 Doorbell Buffer Config (7Ch): Supported 00:22:25.908 Format NVM (80h): Supported LBA-Change 00:22:25.908 I/O Commands 00:22:25.908 ------------ 00:22:25.908 Flush (00h): Supported LBA-Change 00:22:25.908 Write (01h): Supported LBA-Change 00:22:25.908 Read (02h): Supported 00:22:25.908 Compare (05h): Supported 00:22:25.908 Write Zeroes (08h): Supported LBA-Change 00:22:25.908 
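The "Supported" / "Supported LBA-Change" annotations in the command listing above mirror the structure of the Commands Supported and Effects log page (page 05h), which carries one 32-bit entry per opcode — 256 admin entries followed by 256 I/O entries — with CSUPP (bit 0) marking support and LBCC (bit 1) marking commands that may change logical block content. A sketch of that decoding with fabricated entries for a few of the I/O commands shown:

    #include <stdio.h>
    #include <stdint.h>

    #define CSE_CSUPP (1u << 0) /* command supported */
    #define CSE_LBCC  (1u << 1) /* may change logical block content */

    /* Print one effects-log entry in the same style as the dump above. */
    static void print_effect(const char *name, uint8_t opc, uint32_t entry)
    {
        if (!(entry & CSE_CSUPP))
            return;
        printf("%s (%02Xh): Supported%s\n",
               name, opc, (entry & CSE_LBCC) ? " LBA-Change" : "");
    }

    int main(void)
    {
        /* Fabricated entries echoing a few I/O commands from the log. */
        print_effect("Flush",   0x00, CSE_CSUPP | CSE_LBCC);
        print_effect("Write",   0x01, CSE_CSUPP | CSE_LBCC);
        print_effect("Read",    0x02, CSE_CSUPP);
        print_effect("Compare", 0x05, CSE_CSUPP);
        return 0;
    }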
Dataset Management (09h): Supported LBA-Change 00:22:25.908 Unknown (0Ch): Supported 00:22:25.908 Unknown (12h): Supported 00:22:25.908 Copy (19h): Supported LBA-Change 00:22:25.908 Unknown (1Dh): Supported LBA-Change 00:22:25.908 00:22:25.908 Error Log 00:22:25.908 ========= 00:22:25.908 00:22:25.908 Arbitration 00:22:25.908 =========== 00:22:25.908 Arbitration Burst: no limit 00:22:25.908 00:22:25.908 Power Management 00:22:25.908 ================ 00:22:25.908 Number of Power States: 1 00:22:25.908 Current Power State: Power State #0 00:22:25.908 Power State #0: 00:22:25.908 Max Power: 25.00 W 00:22:25.908 Non-Operational State: Operational 00:22:25.908 Entry Latency: 16 microseconds 00:22:25.908 Exit Latency: 4 microseconds 00:22:25.908 Relative Read Throughput: 0 00:22:25.908 Relative Read Latency: 0 00:22:25.908 Relative Write Throughput: 0 00:22:25.908 Relative Write Latency: 0 00:22:25.908 Idle Power: Not Reported 00:22:25.908 Active Power: Not Reported 00:22:25.908 Non-Operational Permissive Mode: Not Supported 00:22:25.908 00:22:25.908 Health Information 00:22:25.908 ================== 00:22:25.908 Critical Warnings: 00:22:25.908 Available Spare Space: OK 00:22:25.908 Temperature: OK 00:22:25.908 Device Reliability: OK 00:22:25.908 Read Only: No 00:22:25.908 Volatile Memory Backup: OK 00:22:25.908 Current Temperature: 323 Kelvin (50 Celsius) 00:22:25.908 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:25.908 Available Spare: 0% 00:22:25.908 Available Spare Threshold: 0% 00:22:25.908 Life Percentage Used: 0% 00:22:25.908 Data Units Read: 1142 00:22:25.908 Data Units Written: 1015 00:22:25.908 Host Read Commands: 44948 00:22:25.908 Host Write Commands: 43851 00:22:25.908 Controller Busy Time: 0 minutes 00:22:25.908 Power Cycles: 0 00:22:25.908 Power On Hours: 0 hours 00:22:25.908 Unsafe Shutdowns: 0 00:22:25.908 Unrecoverable Media Errors: 0 00:22:25.908 Lifetime Error Log Entries: 0 00:22:25.908 Warning Temperature Time: 0 minutes 00:22:25.908 Critical Temperature Time: 0 minutes 00:22:25.908 00:22:25.908 Number of Queues 00:22:25.908 ================ 00:22:25.908 Number of I/O Submission Queues: 64 00:22:25.908 Number of I/O Completion Queues: 64 00:22:25.908 00:22:25.908 ZNS Specific Controller Data 00:22:25.908 ============================ 00:22:25.908 Zone Append Size Limit: 0 00:22:25.908 00:22:25.908 00:22:25.908 Active Namespaces 00:22:25.908 ================= 00:22:25.908 Namespace ID:1 00:22:25.908 Error Recovery Timeout: Unlimited 00:22:25.908 Command Set Identifier: NVM (00h) 00:22:25.908 Deallocate: Supported 00:22:25.908 Deallocated/Unwritten Error: Supported 00:22:25.908 Deallocated Read Value: All 0x00 00:22:25.908 Deallocate in Write Zeroes: Not Supported 00:22:25.908 Deallocated Guard Field: 0xFFFF 00:22:25.908 Flush: Supported 00:22:25.908 Reservation: Not Supported 00:22:25.908 Namespace Sharing Capabilities: Private 00:22:25.908 Size (in LBAs): 1310720 (5GiB) 00:22:25.908 Capacity (in LBAs): 1310720 (5GiB) 00:22:25.908 Utilization (in LBAs): 1310720 (5GiB) 00:22:25.908 Thin Provisioning: Not Supported 00:22:25.908 Per-NS Atomic Units: No 00:22:25.908 Maximum Single Source Range Length: 128 00:22:25.908 Maximum Copy Length: 128 00:22:25.908 Maximum Source Range Count: 128 00:22:25.908 NGUID/EUI64 Never Reused: No 00:22:25.908 Namespace Write Protected: No 00:22:25.908 Number of LBA Formats: 8 00:22:25.908 Current LBA Format: LBA Format #04 00:22:25.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:25.908 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:22:25.908 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:25.908 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:25.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:25.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:25.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:25.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:25.908 00:22:25.908 NVM Specific Namespace Data 00:22:25.908 =========================== 00:22:25.908 Logical Block Storage Tag Mask: 0 00:22:25.908 Protection Information Capabilities: 00:22:25.908 16b Guard Protection Information Storage Tag Support: No 00:22:25.908 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:25.908 Storage Tag Check Read Support: No 00:22:25.908 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.908 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.908 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.908 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.908 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.908 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.908 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.908 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:25.908 23:04:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:22:25.908 23:04:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:22:26.171 ===================================================== 00:22:26.171 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:26.171 ===================================================== 00:22:26.171 Controller Capabilities/Features 00:22:26.171 ================================ 00:22:26.171 Vendor ID: 1b36 00:22:26.171 Subsystem Vendor ID: 1af4 00:22:26.171 Serial Number: 12342 00:22:26.171 Model Number: QEMU NVMe Ctrl 00:22:26.171 Firmware Version: 8.0.0 00:22:26.171 Recommended Arb Burst: 6 00:22:26.171 IEEE OUI Identifier: 00 54 52 00:22:26.171 Multi-path I/O 00:22:26.171 May have multiple subsystem ports: No 00:22:26.171 May have multiple controllers: No 00:22:26.171 Associated with SR-IOV VF: No 00:22:26.171 Max Data Transfer Size: 524288 00:22:26.171 Max Number of Namespaces: 256 00:22:26.171 Max Number of I/O Queues: 64 00:22:26.171 NVMe Specification Version (VS): 1.4 00:22:26.171 NVMe Specification Version (Identify): 1.4 00:22:26.171 Maximum Queue Entries: 2048 00:22:26.171 Contiguous Queues Required: Yes 00:22:26.171 Arbitration Mechanisms Supported 00:22:26.171 Weighted Round Robin: Not Supported 00:22:26.171 Vendor Specific: Not Supported 00:22:26.171 Reset Timeout: 7500 ms 00:22:26.171 Doorbell Stride: 4 bytes 00:22:26.171 NVM Subsystem Reset: Not Supported 00:22:26.171 Command Sets Supported 00:22:26.171 NVM Command Set: Supported 00:22:26.171 Boot Partition: Not Supported 00:22:26.171 Memory Page Size Minimum: 4096 bytes 00:22:26.171 Memory Page Size Maximum: 65536 bytes 00:22:26.171 Persistent Memory Region: Not Supported 00:22:26.171 Optional Asynchronous Events Supported 00:22:26.171 Namespace Attribute Notices: Supported 00:22:26.171 
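Sizes reported "in LBAs" convert to bytes by multiplying by the data size of the current LBA format; for the 12341 controller's namespace above, 1310720 blocks at the 4096-byte format #04 comes out to exactly 5 GiB. A quick arithmetic check:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* NSZE from Identify Namespace, times the current format's data size:
         * 1310720 * 4096 = 5368709120 bytes = 5 GiB, as printed above. */
        uint64_t nsze = 1310720;
        uint32_t block_size = 4096; /* LBA Format #04 */
        uint64_t bytes = nsze * block_size;
        printf("%llu LBAs x %u B = %llu bytes (%lluGiB)\n",
               (unsigned long long)nsze, block_size,
               (unsigned long long)bytes,
               (unsigned long long)(bytes >> 30));
        return 0;
    }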
Firmware Activation Notices: Not Supported 00:22:26.171 ANA Change Notices: Not Supported 00:22:26.171 PLE Aggregate Log Change Notices: Not Supported 00:22:26.171 LBA Status Info Alert Notices: Not Supported 00:22:26.171 EGE Aggregate Log Change Notices: Not Supported 00:22:26.171 Normal NVM Subsystem Shutdown event: Not Supported 00:22:26.171 Zone Descriptor Change Notices: Not Supported 00:22:26.171 Discovery Log Change Notices: Not Supported 00:22:26.171 Controller Attributes 00:22:26.171 128-bit Host Identifier: Not Supported 00:22:26.171 Non-Operational Permissive Mode: Not Supported 00:22:26.171 NVM Sets: Not Supported 00:22:26.171 Read Recovery Levels: Not Supported 00:22:26.171 Endurance Groups: Not Supported 00:22:26.171 Predictable Latency Mode: Not Supported 00:22:26.171 Traffic Based Keep ALive: Not Supported 00:22:26.171 Namespace Granularity: Not Supported 00:22:26.171 SQ Associations: Not Supported 00:22:26.171 UUID List: Not Supported 00:22:26.171 Multi-Domain Subsystem: Not Supported 00:22:26.171 Fixed Capacity Management: Not Supported 00:22:26.171 Variable Capacity Management: Not Supported 00:22:26.171 Delete Endurance Group: Not Supported 00:22:26.171 Delete NVM Set: Not Supported 00:22:26.171 Extended LBA Formats Supported: Supported 00:22:26.171 Flexible Data Placement Supported: Not Supported 00:22:26.171 00:22:26.171 Controller Memory Buffer Support 00:22:26.171 ================================ 00:22:26.171 Supported: No 00:22:26.171 00:22:26.171 Persistent Memory Region Support 00:22:26.171 ================================ 00:22:26.171 Supported: No 00:22:26.171 00:22:26.171 Admin Command Set Attributes 00:22:26.171 ============================ 00:22:26.171 Security Send/Receive: Not Supported 00:22:26.171 Format NVM: Supported 00:22:26.171 Firmware Activate/Download: Not Supported 00:22:26.171 Namespace Management: Supported 00:22:26.171 Device Self-Test: Not Supported 00:22:26.171 Directives: Supported 00:22:26.171 NVMe-MI: Not Supported 00:22:26.171 Virtualization Management: Not Supported 00:22:26.171 Doorbell Buffer Config: Supported 00:22:26.171 Get LBA Status Capability: Not Supported 00:22:26.171 Command & Feature Lockdown Capability: Not Supported 00:22:26.171 Abort Command Limit: 4 00:22:26.171 Async Event Request Limit: 4 00:22:26.171 Number of Firmware Slots: N/A 00:22:26.171 Firmware Slot 1 Read-Only: N/A 00:22:26.171 Firmware Activation Without Reset: N/A 00:22:26.171 Multiple Update Detection Support: N/A 00:22:26.171 Firmware Update Granularity: No Information Provided 00:22:26.171 Per-Namespace SMART Log: Yes 00:22:26.171 Asymmetric Namespace Access Log Page: Not Supported 00:22:26.171 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:22:26.171 Command Effects Log Page: Supported 00:22:26.171 Get Log Page Extended Data: Supported 00:22:26.171 Telemetry Log Pages: Not Supported 00:22:26.171 Persistent Event Log Pages: Not Supported 00:22:26.171 Supported Log Pages Log Page: May Support 00:22:26.171 Commands Supported & Effects Log Page: Not Supported 00:22:26.171 Feature Identifiers & Effects Log Page:May Support 00:22:26.171 NVMe-MI Commands & Effects Log Page: May Support 00:22:26.171 Data Area 4 for Telemetry Log: Not Supported 00:22:26.171 Error Log Page Entries Supported: 1 00:22:26.171 Keep Alive: Not Supported 00:22:26.171 00:22:26.171 NVM Command Set Attributes 00:22:26.171 ========================== 00:22:26.171 Submission Queue Entry Size 00:22:26.171 Max: 64 00:22:26.171 Min: 64 00:22:26.171 Completion Queue Entry Size 00:22:26.171 Max: 16 
00:22:26.171 Min: 16 00:22:26.171 Number of Namespaces: 256 00:22:26.171 Compare Command: Supported 00:22:26.171 Write Uncorrectable Command: Not Supported 00:22:26.171 Dataset Management Command: Supported 00:22:26.171 Write Zeroes Command: Supported 00:22:26.171 Set Features Save Field: Supported 00:22:26.171 Reservations: Not Supported 00:22:26.171 Timestamp: Supported 00:22:26.171 Copy: Supported 00:22:26.171 Volatile Write Cache: Present 00:22:26.171 Atomic Write Unit (Normal): 1 00:22:26.171 Atomic Write Unit (PFail): 1 00:22:26.171 Atomic Compare & Write Unit: 1 00:22:26.171 Fused Compare & Write: Not Supported 00:22:26.171 Scatter-Gather List 00:22:26.171 SGL Command Set: Supported 00:22:26.171 SGL Keyed: Not Supported 00:22:26.171 SGL Bit Bucket Descriptor: Not Supported 00:22:26.171 SGL Metadata Pointer: Not Supported 00:22:26.171 Oversized SGL: Not Supported 00:22:26.171 SGL Metadata Address: Not Supported 00:22:26.171 SGL Offset: Not Supported 00:22:26.171 Transport SGL Data Block: Not Supported 00:22:26.171 Replay Protected Memory Block: Not Supported 00:22:26.171 00:22:26.171 Firmware Slot Information 00:22:26.171 ========================= 00:22:26.171 Active slot: 1 00:22:26.172 Slot 1 Firmware Revision: 1.0 00:22:26.172 00:22:26.172 00:22:26.172 Commands Supported and Effects 00:22:26.172 ============================== 00:22:26.172 Admin Commands 00:22:26.172 -------------- 00:22:26.172 Delete I/O Submission Queue (00h): Supported 00:22:26.172 Create I/O Submission Queue (01h): Supported 00:22:26.172 Get Log Page (02h): Supported 00:22:26.172 Delete I/O Completion Queue (04h): Supported 00:22:26.172 Create I/O Completion Queue (05h): Supported 00:22:26.172 Identify (06h): Supported 00:22:26.172 Abort (08h): Supported 00:22:26.172 Set Features (09h): Supported 00:22:26.172 Get Features (0Ah): Supported 00:22:26.172 Asynchronous Event Request (0Ch): Supported 00:22:26.172 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:26.172 Directive Send (19h): Supported 00:22:26.172 Directive Receive (1Ah): Supported 00:22:26.172 Virtualization Management (1Ch): Supported 00:22:26.172 Doorbell Buffer Config (7Ch): Supported 00:22:26.172 Format NVM (80h): Supported LBA-Change 00:22:26.172 I/O Commands 00:22:26.172 ------------ 00:22:26.172 Flush (00h): Supported LBA-Change 00:22:26.172 Write (01h): Supported LBA-Change 00:22:26.172 Read (02h): Supported 00:22:26.172 Compare (05h): Supported 00:22:26.172 Write Zeroes (08h): Supported LBA-Change 00:22:26.172 Dataset Management (09h): Supported LBA-Change 00:22:26.172 Unknown (0Ch): Supported 00:22:26.172 Unknown (12h): Supported 00:22:26.172 Copy (19h): Supported LBA-Change 00:22:26.172 Unknown (1Dh): Supported LBA-Change 00:22:26.172 00:22:26.172 Error Log 00:22:26.172 ========= 00:22:26.172 00:22:26.172 Arbitration 00:22:26.172 =========== 00:22:26.172 Arbitration Burst: no limit 00:22:26.172 00:22:26.172 Power Management 00:22:26.172 ================ 00:22:26.172 Number of Power States: 1 00:22:26.172 Current Power State: Power State #0 00:22:26.172 Power State #0: 00:22:26.172 Max Power: 25.00 W 00:22:26.172 Non-Operational State: Operational 00:22:26.172 Entry Latency: 16 microseconds 00:22:26.172 Exit Latency: 4 microseconds 00:22:26.172 Relative Read Throughput: 0 00:22:26.172 Relative Read Latency: 0 00:22:26.172 Relative Write Throughput: 0 00:22:26.172 Relative Write Latency: 0 00:22:26.172 Idle Power: Not Reported 00:22:26.172 Active Power: Not Reported 00:22:26.172 Non-Operational Permissive Mode: Not Supported 
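The Health Information block that follows is a rendering of the SMART / Health log (log page 02h), whose leading bytes carry the critical warning flags, the composite temperature as little-endian Kelvin (hence "323 Kelvin (50 Celsius)": Celsius is simply Kelvin minus 273), and the spare/usage percentages. A minimal parsing sketch over a fabricated buffer:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* SMART / Health log (page 02h) prefix: byte 0 critical warning,
     * bytes 1-2 composite temperature (Kelvin, little-endian), byte 3
     * available spare %, byte 4 spare threshold %, byte 5 percentage used. */
    int main(void)
    {
        uint8_t log[512] = {0};
        /* Fabricated contents matching the dump: 323 K, 0% spare, 0% used. */
        log[1] = 323 & 0xff;
        log[2] = 323 >> 8;

        uint16_t temp_k;
        memcpy(&temp_k, &log[1], sizeof(temp_k)); /* little-endian host assumed */
        printf("Current Temperature: %u Kelvin (%d Celsius)\n",
               temp_k, (int)temp_k - 273);
        printf("Available Spare: %u%%\n", log[3]);
        printf("Available Spare Threshold: %u%%\n", log[4]);
        printf("Life Percentage Used: %u%%\n", log[5]);
        return 0;
    }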
00:22:26.172 00:22:26.172 Health Information 00:22:26.172 ================== 00:22:26.172 Critical Warnings: 00:22:26.172 Available Spare Space: OK 00:22:26.172 Temperature: OK 00:22:26.172 Device Reliability: OK 00:22:26.172 Read Only: No 00:22:26.172 Volatile Memory Backup: OK 00:22:26.172 Current Temperature: 323 Kelvin (50 Celsius) 00:22:26.172 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:26.172 Available Spare: 0% 00:22:26.172 Available Spare Threshold: 0% 00:22:26.172 Life Percentage Used: 0% 00:22:26.172 Data Units Read: 2255 00:22:26.172 Data Units Written: 2042 00:22:26.172 Host Read Commands: 90009 00:22:26.172 Host Write Commands: 88278 00:22:26.172 Controller Busy Time: 0 minutes 00:22:26.172 Power Cycles: 0 00:22:26.172 Power On Hours: 0 hours 00:22:26.172 Unsafe Shutdowns: 0 00:22:26.172 Unrecoverable Media Errors: 0 00:22:26.172 Lifetime Error Log Entries: 0 00:22:26.172 Warning Temperature Time: 0 minutes 00:22:26.172 Critical Temperature Time: 0 minutes 00:22:26.172 00:22:26.172 Number of Queues 00:22:26.172 ================ 00:22:26.172 Number of I/O Submission Queues: 64 00:22:26.172 Number of I/O Completion Queues: 64 00:22:26.172 00:22:26.172 ZNS Specific Controller Data 00:22:26.172 ============================ 00:22:26.172 Zone Append Size Limit: 0 00:22:26.172 00:22:26.172 00:22:26.172 Active Namespaces 00:22:26.172 ================= 00:22:26.172 Namespace ID:1 00:22:26.172 Error Recovery Timeout: Unlimited 00:22:26.172 Command Set Identifier: NVM (00h) 00:22:26.172 Deallocate: Supported 00:22:26.172 Deallocated/Unwritten Error: Supported 00:22:26.172 Deallocated Read Value: All 0x00 00:22:26.172 Deallocate in Write Zeroes: Not Supported 00:22:26.172 Deallocated Guard Field: 0xFFFF 00:22:26.172 Flush: Supported 00:22:26.172 Reservation: Not Supported 00:22:26.172 Namespace Sharing Capabilities: Private 00:22:26.172 Size (in LBAs): 1048576 (4GiB) 00:22:26.172 Capacity (in LBAs): 1048576 (4GiB) 00:22:26.172 Utilization (in LBAs): 1048576 (4GiB) 00:22:26.172 Thin Provisioning: Not Supported 00:22:26.172 Per-NS Atomic Units: No 00:22:26.172 Maximum Single Source Range Length: 128 00:22:26.172 Maximum Copy Length: 128 00:22:26.172 Maximum Source Range Count: 128 00:22:26.172 NGUID/EUI64 Never Reused: No 00:22:26.172 Namespace Write Protected: No 00:22:26.172 Number of LBA Formats: 8 00:22:26.172 Current LBA Format: LBA Format #04 00:22:26.172 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:26.172 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:26.172 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:26.172 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:26.172 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:26.172 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:26.172 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:26.172 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:26.172 00:22:26.172 NVM Specific Namespace Data 00:22:26.172 =========================== 00:22:26.172 Logical Block Storage Tag Mask: 0 00:22:26.172 Protection Information Capabilities: 00:22:26.172 16b Guard Protection Information Storage Tag Support: No 00:22:26.172 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:26.172 Storage Tag Check Read Support: No 00:22:26.172 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Namespace ID:2 00:22:26.172 Error Recovery Timeout: Unlimited 00:22:26.172 Command Set Identifier: NVM (00h) 00:22:26.172 Deallocate: Supported 00:22:26.172 Deallocated/Unwritten Error: Supported 00:22:26.172 Deallocated Read Value: All 0x00 00:22:26.172 Deallocate in Write Zeroes: Not Supported 00:22:26.172 Deallocated Guard Field: 0xFFFF 00:22:26.172 Flush: Supported 00:22:26.172 Reservation: Not Supported 00:22:26.172 Namespace Sharing Capabilities: Private 00:22:26.172 Size (in LBAs): 1048576 (4GiB) 00:22:26.172 Capacity (in LBAs): 1048576 (4GiB) 00:22:26.172 Utilization (in LBAs): 1048576 (4GiB) 00:22:26.172 Thin Provisioning: Not Supported 00:22:26.172 Per-NS Atomic Units: No 00:22:26.172 Maximum Single Source Range Length: 128 00:22:26.172 Maximum Copy Length: 128 00:22:26.172 Maximum Source Range Count: 128 00:22:26.172 NGUID/EUI64 Never Reused: No 00:22:26.172 Namespace Write Protected: No 00:22:26.172 Number of LBA Formats: 8 00:22:26.172 Current LBA Format: LBA Format #04 00:22:26.172 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:26.172 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:26.172 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:26.172 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:26.172 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:26.172 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:26.172 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:26.172 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:26.172 00:22:26.172 NVM Specific Namespace Data 00:22:26.172 =========================== 00:22:26.172 Logical Block Storage Tag Mask: 0 00:22:26.172 Protection Information Capabilities: 00:22:26.172 16b Guard Protection Information Storage Tag Support: No 00:22:26.172 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:26.172 Storage Tag Check Read Support: No 00:22:26.172 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.172 Namespace ID:3 00:22:26.172 Error Recovery Timeout: Unlimited 00:22:26.172 Command Set Identifier: NVM (00h) 00:22:26.172 Deallocate: Supported 00:22:26.172 Deallocated/Unwritten Error: Supported 00:22:26.172 Deallocated Read 
Value: All 0x00 00:22:26.172 Deallocate in Write Zeroes: Not Supported 00:22:26.173 Deallocated Guard Field: 0xFFFF 00:22:26.173 Flush: Supported 00:22:26.173 Reservation: Not Supported 00:22:26.173 Namespace Sharing Capabilities: Private 00:22:26.173 Size (in LBAs): 1048576 (4GiB) 00:22:26.173 Capacity (in LBAs): 1048576 (4GiB) 00:22:26.173 Utilization (in LBAs): 1048576 (4GiB) 00:22:26.173 Thin Provisioning: Not Supported 00:22:26.173 Per-NS Atomic Units: No 00:22:26.173 Maximum Single Source Range Length: 128 00:22:26.173 Maximum Copy Length: 128 00:22:26.173 Maximum Source Range Count: 128 00:22:26.173 NGUID/EUI64 Never Reused: No 00:22:26.173 Namespace Write Protected: No 00:22:26.173 Number of LBA Formats: 8 00:22:26.173 Current LBA Format: LBA Format #04 00:22:26.173 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:26.173 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:26.173 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:26.173 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:26.173 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:26.173 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:26.173 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:26.173 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:26.173 00:22:26.173 NVM Specific Namespace Data 00:22:26.173 =========================== 00:22:26.173 Logical Block Storage Tag Mask: 0 00:22:26.173 Protection Information Capabilities: 00:22:26.173 16b Guard Protection Information Storage Tag Support: No 00:22:26.173 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:26.173 Storage Tag Check Read Support: No 00:22:26.173 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.173 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.173 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.173 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.173 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.173 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.173 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.173 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.173 23:04:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:22:26.173 23:04:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:22:26.435 ===================================================== 00:22:26.435 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:26.435 ===================================================== 00:22:26.435 Controller Capabilities/Features 00:22:26.435 ================================ 00:22:26.435 Vendor ID: 1b36 00:22:26.435 Subsystem Vendor ID: 1af4 00:22:26.435 Serial Number: 12343 00:22:26.435 Model Number: QEMU NVMe Ctrl 00:22:26.435 Firmware Version: 8.0.0 00:22:26.435 Recommended Arb Burst: 6 00:22:26.435 IEEE OUI Identifier: 00 54 52 00:22:26.435 Multi-path I/O 00:22:26.435 May have multiple subsystem ports: No 00:22:26.435 May have multiple controllers: Yes 00:22:26.435 Associated with SR-IOV VF: No 00:22:26.435 Max Data Transfer Size: 524288 00:22:26.435 Max Number of Namespaces: 
256 00:22:26.435 Max Number of I/O Queues: 64 00:22:26.435 NVMe Specification Version (VS): 1.4 00:22:26.435 NVMe Specification Version (Identify): 1.4 00:22:26.435 Maximum Queue Entries: 2048 00:22:26.435 Contiguous Queues Required: Yes 00:22:26.435 Arbitration Mechanisms Supported 00:22:26.435 Weighted Round Robin: Not Supported 00:22:26.435 Vendor Specific: Not Supported 00:22:26.435 Reset Timeout: 7500 ms 00:22:26.435 Doorbell Stride: 4 bytes 00:22:26.435 NVM Subsystem Reset: Not Supported 00:22:26.435 Command Sets Supported 00:22:26.435 NVM Command Set: Supported 00:22:26.435 Boot Partition: Not Supported 00:22:26.435 Memory Page Size Minimum: 4096 bytes 00:22:26.435 Memory Page Size Maximum: 65536 bytes 00:22:26.435 Persistent Memory Region: Not Supported 00:22:26.435 Optional Asynchronous Events Supported 00:22:26.435 Namespace Attribute Notices: Supported 00:22:26.435 Firmware Activation Notices: Not Supported 00:22:26.435 ANA Change Notices: Not Supported 00:22:26.435 PLE Aggregate Log Change Notices: Not Supported 00:22:26.435 LBA Status Info Alert Notices: Not Supported 00:22:26.435 EGE Aggregate Log Change Notices: Not Supported 00:22:26.435 Normal NVM Subsystem Shutdown event: Not Supported 00:22:26.435 Zone Descriptor Change Notices: Not Supported 00:22:26.435 Discovery Log Change Notices: Not Supported 00:22:26.435 Controller Attributes 00:22:26.435 128-bit Host Identifier: Not Supported 00:22:26.435 Non-Operational Permissive Mode: Not Supported 00:22:26.435 NVM Sets: Not Supported 00:22:26.435 Read Recovery Levels: Not Supported 00:22:26.435 Endurance Groups: Supported 00:22:26.435 Predictable Latency Mode: Not Supported 00:22:26.435 Traffic Based Keep Alive: Not Supported 00:22:26.435 Namespace Granularity: Not Supported 00:22:26.435 SQ Associations: Not Supported 00:22:26.435 UUID List: Not Supported 00:22:26.435 Multi-Domain Subsystem: Not Supported 00:22:26.435 Fixed Capacity Management: Not Supported 00:22:26.435 Variable Capacity Management: Not Supported 00:22:26.435 Delete Endurance Group: Not Supported 00:22:26.435 Delete NVM Set: Not Supported 00:22:26.435 Extended LBA Formats Supported: Supported 00:22:26.435 Flexible Data Placement Supported: Supported 00:22:26.435 00:22:26.435 Controller Memory Buffer Support 00:22:26.435 ================================ 00:22:26.435 Supported: No 00:22:26.435 00:22:26.435 Persistent Memory Region Support 00:22:26.435 ================================ 00:22:26.435 Supported: No 00:22:26.435 00:22:26.435 Admin Command Set Attributes 00:22:26.435 ============================ 00:22:26.435 Security Send/Receive: Not Supported 00:22:26.435 Format NVM: Supported 00:22:26.435 Firmware Activate/Download: Not Supported 00:22:26.435 Namespace Management: Supported 00:22:26.435 Device Self-Test: Not Supported 00:22:26.435 Directives: Supported 00:22:26.435 NVMe-MI: Not Supported 00:22:26.435 Virtualization Management: Not Supported 00:22:26.435 Doorbell Buffer Config: Supported 00:22:26.435 Get LBA Status Capability: Not Supported 00:22:26.435 Command & Feature Lockdown Capability: Not Supported 00:22:26.435 Abort Command Limit: 4 00:22:26.435 Async Event Request Limit: 4 00:22:26.435 Number of Firmware Slots: N/A 00:22:26.435 Firmware Slot 1 Read-Only: N/A 00:22:26.435 Firmware Activation Without Reset: N/A 00:22:26.435 Multiple Update Detection Support: N/A 00:22:26.435 Firmware Update Granularity: No Information Provided 00:22:26.435 Per-Namespace SMART Log: Yes 00:22:26.435 Asymmetric Namespace Access Log Page: Not Supported
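The attribute lines above are the raw output of spdk_nvme_identify for the controller at 0000:00:13.0; each capability is reported as a literal Supported/Not Supported string. To check a single capability without scanning the whole dump, the same invocation recorded earlier in this log can be piped through grep. A minimal sketch, reusing the binary path, transport string, and -i 0 shared-memory ID exactly as they appear in the logged command (only the grep pattern is new):

  # Re-run identify against one PCIe controller and pull out two capability lines.
  # BDF and flags are copied from the spdk_nvme_identify call logged above.
  bdf="0000:00:13.0"
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r "trtype:PCIe traddr:${bdf}" -i 0 \
    | grep -E 'Endurance Groups|Flexible Data Placement Supported'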
00:22:26.435 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:22:26.435 Command Effects Log Page: Supported 00:22:26.435 Get Log Page Extended Data: Supported 00:22:26.435 Telemetry Log Pages: Not Supported 00:22:26.435 Persistent Event Log Pages: Not Supported 00:22:26.435 Supported Log Pages Log Page: May Support 00:22:26.435 Commands Supported & Effects Log Page: Not Supported 00:22:26.435 Feature Identifiers & Effects Log Page: May Support 00:22:26.435 NVMe-MI Commands & Effects Log Page: May Support 00:22:26.435 Data Area 4 for Telemetry Log: Not Supported 00:22:26.435 Error Log Page Entries Supported: 1 00:22:26.435 Keep Alive: Not Supported 00:22:26.435 00:22:26.435 NVM Command Set Attributes 00:22:26.435 ========================== 00:22:26.435 Submission Queue Entry Size 00:22:26.435 Max: 64 00:22:26.436 Min: 64 00:22:26.436 Completion Queue Entry Size 00:22:26.436 Max: 16 00:22:26.436 Min: 16 00:22:26.436 Number of Namespaces: 256 00:22:26.436 Compare Command: Supported 00:22:26.436 Write Uncorrectable Command: Not Supported 00:22:26.436 Dataset Management Command: Supported 00:22:26.436 Write Zeroes Command: Supported 00:22:26.436 Set Features Save Field: Supported 00:22:26.436 Reservations: Not Supported 00:22:26.436 Timestamp: Supported 00:22:26.436 Copy: Supported 00:22:26.436 Volatile Write Cache: Present 00:22:26.436 Atomic Write Unit (Normal): 1 00:22:26.436 Atomic Write Unit (PFail): 1 00:22:26.436 Atomic Compare & Write Unit: 1 00:22:26.436 Fused Compare & Write: Not Supported 00:22:26.436 Scatter-Gather List 00:22:26.436 SGL Command Set: Supported 00:22:26.436 SGL Keyed: Not Supported 00:22:26.436 SGL Bit Bucket Descriptor: Not Supported 00:22:26.436 SGL Metadata Pointer: Not Supported 00:22:26.436 Oversized SGL: Not Supported 00:22:26.436 SGL Metadata Address: Not Supported 00:22:26.436 SGL Offset: Not Supported 00:22:26.436 Transport SGL Data Block: Not Supported 00:22:26.436 Replay Protected Memory Block: Not Supported 00:22:26.436 00:22:26.436 Firmware Slot Information 00:22:26.436 ========================= 00:22:26.436 Active slot: 1 00:22:26.436 Slot 1 Firmware Revision: 1.0 00:22:26.436 00:22:26.436 00:22:26.436 Commands Supported and Effects 00:22:26.436 ============================== 00:22:26.436 Admin Commands 00:22:26.436 -------------- 00:22:26.436 Delete I/O Submission Queue (00h): Supported 00:22:26.436 Create I/O Submission Queue (01h): Supported 00:22:26.436 Get Log Page (02h): Supported 00:22:26.436 Delete I/O Completion Queue (04h): Supported 00:22:26.436 Create I/O Completion Queue (05h): Supported 00:22:26.436 Identify (06h): Supported 00:22:26.436 Abort (08h): Supported 00:22:26.436 Set Features (09h): Supported 00:22:26.436 Get Features (0Ah): Supported 00:22:26.436 Asynchronous Event Request (0Ch): Supported 00:22:26.436 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:26.436 Directive Send (19h): Supported 00:22:26.436 Directive Receive (1Ah): Supported 00:22:26.436 Virtualization Management (1Ch): Supported 00:22:26.436 Doorbell Buffer Config (7Ch): Supported 00:22:26.436 Format NVM (80h): Supported LBA-Change 00:22:26.436 I/O Commands 00:22:26.436 ------------ 00:22:26.436 Flush (00h): Supported LBA-Change 00:22:26.436 Write (01h): Supported LBA-Change 00:22:26.436 Read (02h): Supported 00:22:26.436 Compare (05h): Supported 00:22:26.436 Write Zeroes (08h): Supported LBA-Change 00:22:26.436 Dataset Management (09h): Supported LBA-Change 00:22:26.436 Unknown (0Ch): Supported 00:22:26.436 Unknown (12h): Supported 00:22:26.436 Copy
(19h): Supported LBA-Change 00:22:26.436 Unknown (1Dh): Supported LBA-Change 00:22:26.436 00:22:26.436 Error Log 00:22:26.436 ========= 00:22:26.436 00:22:26.436 Arbitration 00:22:26.436 =========== 00:22:26.436 Arbitration Burst: no limit 00:22:26.436 00:22:26.436 Power Management 00:22:26.436 ================ 00:22:26.436 Number of Power States: 1 00:22:26.436 Current Power State: Power State #0 00:22:26.436 Power State #0: 00:22:26.436 Max Power: 25.00 W 00:22:26.436 Non-Operational State: Operational 00:22:26.436 Entry Latency: 16 microseconds 00:22:26.436 Exit Latency: 4 microseconds 00:22:26.436 Relative Read Throughput: 0 00:22:26.436 Relative Read Latency: 0 00:22:26.436 Relative Write Throughput: 0 00:22:26.436 Relative Write Latency: 0 00:22:26.436 Idle Power: Not Reported 00:22:26.436 Active Power: Not Reported 00:22:26.436 Non-Operational Permissive Mode: Not Supported 00:22:26.436 00:22:26.436 Health Information 00:22:26.436 ================== 00:22:26.436 Critical Warnings: 00:22:26.436 Available Spare Space: OK 00:22:26.436 Temperature: OK 00:22:26.436 Device Reliability: OK 00:22:26.436 Read Only: No 00:22:26.436 Volatile Memory Backup: OK 00:22:26.436 Current Temperature: 323 Kelvin (50 Celsius) 00:22:26.436 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:26.436 Available Spare: 0% 00:22:26.436 Available Spare Threshold: 0% 00:22:26.436 Life Percentage Used: 0% 00:22:26.436 Data Units Read: 776 00:22:26.436 Data Units Written: 705 00:22:26.436 Host Read Commands: 30202 00:22:26.436 Host Write Commands: 29625 00:22:26.436 Controller Busy Time: 0 minutes 00:22:26.436 Power Cycles: 0 00:22:26.436 Power On Hours: 0 hours 00:22:26.436 Unsafe Shutdowns: 0 00:22:26.436 Unrecoverable Media Errors: 0 00:22:26.436 Lifetime Error Log Entries: 0 00:22:26.436 Warning Temperature Time: 0 minutes 00:22:26.436 Critical Temperature Time: 0 minutes 00:22:26.436 00:22:26.436 Number of Queues 00:22:26.436 ================ 00:22:26.436 Number of I/O Submission Queues: 64 00:22:26.436 Number of I/O Completion Queues: 64 00:22:26.436 00:22:26.436 ZNS Specific Controller Data 00:22:26.436 ============================ 00:22:26.436 Zone Append Size Limit: 0 00:22:26.436 00:22:26.436 00:22:26.436 Active Namespaces 00:22:26.436 ================= 00:22:26.436 Namespace ID:1 00:22:26.436 Error Recovery Timeout: Unlimited 00:22:26.436 Command Set Identifier: NVM (00h) 00:22:26.436 Deallocate: Supported 00:22:26.436 Deallocated/Unwritten Error: Supported 00:22:26.436 Deallocated Read Value: All 0x00 00:22:26.436 Deallocate in Write Zeroes: Not Supported 00:22:26.436 Deallocated Guard Field: 0xFFFF 00:22:26.436 Flush: Supported 00:22:26.436 Reservation: Not Supported 00:22:26.436 Namespace Sharing Capabilities: Multiple Controllers 00:22:26.436 Size (in LBAs): 262144 (1GiB) 00:22:26.436 Capacity (in LBAs): 262144 (1GiB) 00:22:26.436 Utilization (in LBAs): 262144 (1GiB) 00:22:26.436 Thin Provisioning: Not Supported 00:22:26.436 Per-NS Atomic Units: No 00:22:26.436 Maximum Single Source Range Length: 128 00:22:26.436 Maximum Copy Length: 128 00:22:26.436 Maximum Source Range Count: 128 00:22:26.436 NGUID/EUI64 Never Reused: No 00:22:26.436 Namespace Write Protected: No 00:22:26.436 Endurance group ID: 1 00:22:26.436 Number of LBA Formats: 8 00:22:26.436 Current LBA Format: LBA Format #04 00:22:26.436 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:26.436 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:26.436 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:26.436 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:22:26.436 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:26.436 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:26.436 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:26.436 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:26.436 00:22:26.436 Get Feature FDP: 00:22:26.436 ================ 00:22:26.436 Enabled: Yes 00:22:26.436 FDP configuration index: 0 00:22:26.436 00:22:26.436 FDP configurations log page 00:22:26.436 =========================== 00:22:26.436 Number of FDP configurations: 1 00:22:26.436 Version: 0 00:22:26.436 Size: 112 00:22:26.436 FDP Configuration Descriptor: 0 00:22:26.436 Descriptor Size: 96 00:22:26.436 Reclaim Group Identifier format: 2 00:22:26.436 FDP Volatile Write Cache: Not Present 00:22:26.436 FDP Configuration: Valid 00:22:26.436 Vendor Specific Size: 0 00:22:26.436 Number of Reclaim Groups: 2 00:22:26.436 Number of Reclaim Unit Handles: 8 00:22:26.436 Max Placement Identifiers: 128 00:22:26.436 Number of Namespaces Supported: 256 00:22:26.436 Reclaim Unit Nominal Size: 6000000 bytes 00:22:26.436 Estimated Reclaim Unit Time Limit: Not Reported 00:22:26.436 RUH Desc #000: RUH Type: Initially Isolated 00:22:26.436 RUH Desc #001: RUH Type: Initially Isolated 00:22:26.436 RUH Desc #002: RUH Type: Initially Isolated 00:22:26.436 RUH Desc #003: RUH Type: Initially Isolated 00:22:26.436 RUH Desc #004: RUH Type: Initially Isolated 00:22:26.436 RUH Desc #005: RUH Type: Initially Isolated 00:22:26.436 RUH Desc #006: RUH Type: Initially Isolated 00:22:26.436 RUH Desc #007: RUH Type: Initially Isolated 00:22:26.436 00:22:26.436 FDP reclaim unit handle usage log page 00:22:26.436 ====================================== 00:22:26.436 Number of Reclaim Unit Handles: 8 00:22:26.436 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:22:26.436 RUH Usage Desc #001: RUH Attributes: Unused 00:22:26.436 RUH Usage Desc #002: RUH Attributes: Unused 00:22:26.436 RUH Usage Desc #003: RUH Attributes: Unused 00:22:26.436 RUH Usage Desc #004: RUH Attributes: Unused 00:22:26.436 RUH Usage Desc #005: RUH Attributes: Unused 00:22:26.436 RUH Usage Desc #006: RUH Attributes: Unused 00:22:26.436 RUH Usage Desc #007: RUH Attributes: Unused 00:22:26.436 00:22:26.436 FDP statistics log page 00:22:26.436 ======================= 00:22:26.436 Host bytes with metadata written: 429826048 00:22:26.436 Media bytes with metadata written: 429891584 00:22:26.436 Media bytes erased: 0 00:22:26.436 00:22:26.436 FDP events log page 00:22:26.436 =================== 00:22:26.436 Number of FDP events: 0 00:22:26.436 00:22:26.436 NVM Specific Namespace Data 00:22:26.436 =========================== 00:22:26.437 Logical Block Storage Tag Mask: 0 00:22:26.437 Protection Information Capabilities: 00:22:26.437 16b Guard Protection Information Storage Tag Support: No 00:22:26.437 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:26.437 Storage Tag Check Read Support: No 00:22:26.437 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.437 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.437 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.437 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.437 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.437 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.437 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.437 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:26.437 ************************************ 00:22:26.437 END TEST nvme_identify 00:22:26.437 ************************************ 00:22:26.437 00:22:26.437 real 0m1.357s 00:22:26.437 user 0m0.477s 00:22:26.437 sys 0m0.648s 00:22:26.437 23:04:04 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.437 23:04:04 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.437 23:04:04 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:22:26.437 23:04:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:26.437 23:04:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.437 23:04:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:26.437 ************************************ 00:22:26.437 START TEST nvme_perf 00:22:26.437 ************************************ 00:22:26.437 23:04:04 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:22:26.437 23:04:04 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:22:27.833 Initializing NVMe Controllers 00:22:27.833 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:27.833 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:27.833 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:27.833 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:27.833 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:22:27.833 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:22:27.833 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:27.833 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:22:27.833 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:22:27.833 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:22:27.833 Initialization complete. Launching workers. 
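Reading the nvme_perf invocation above: -q 128 sets the per-queue I/O depth, -w read selects the read workload, -o 12288 sets the I/O size in bytes (12 KiB, i.e. three blocks at the 4096-byte LBA format the namespaces are using), and -t 1 limits the run to one second. Treating -LL as detailed latency tracking, -i 0 as the shared-memory group ID, and -N as a shutdown-notification option is an assumption from common spdk_nvme_perf usage, not something this log states. A sketch of the same run narrowed to a single controller via a -r transport filter (the filter is a hypothetical addition; every other flag is copied from the logged command):

  # Same one-second read workload, restricted to the controller at 0000:00:11.0.
  # Short runs like this are CI smoke tests, not benchmarks.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w read -o 12288 -t 1 -LL -i 0 -N \
      -r 'trtype:PCIe traddr:0000:00:11.0'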
00:22:27.833 ======================================================== 00:22:27.833 Latency(us) 00:22:27.833 Device Information : IOPS MiB/s Average min max 00:22:27.833 PCIE (0000:00:11.0) NSID 1 from core 0: 7176.23 84.10 17883.72 15008.72 47781.37 00:22:27.833 PCIE (0000:00:13.0) NSID 1 from core 0: 7176.23 84.10 17862.38 15066.26 46699.90 00:22:27.833 PCIE (0000:00:10.0) NSID 1 from core 0: 7176.23 84.10 17832.06 14347.48 45414.26 00:22:27.833 PCIE (0000:00:12.0) NSID 1 from core 0: 7176.23 84.10 17801.93 14067.72 43988.63 00:22:27.833 PCIE (0000:00:12.0) NSID 2 from core 0: 7176.23 84.10 17770.77 12533.46 44074.04 00:22:27.833 PCIE (0000:00:12.0) NSID 3 from core 0: 7239.74 84.84 17584.31 12315.61 33290.39 00:22:27.833 ======================================================== 00:22:27.833 Total : 43120.91 505.32 17788.89 12315.61 47781.37 00:22:27.833 00:22:27.833 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:22:27.833 ================================================================================= 00:22:27.833 1.00000% : 15426.166us 00:22:27.833 10.00000% : 16131.938us 00:22:27.833 25.00000% : 16535.237us 00:22:27.833 50.00000% : 17241.009us 00:22:27.833 75.00000% : 18047.606us 00:22:27.833 90.00000% : 19963.274us 00:22:27.833 95.00000% : 20769.871us 00:22:27.833 98.00000% : 22988.012us 00:22:27.833 99.00000% : 38111.705us 00:22:27.833 99.50000% : 46782.622us 00:22:27.833 99.90000% : 47589.218us 00:22:27.833 99.99000% : 47790.868us 00:22:27.833 99.99900% : 47790.868us 00:22:27.833 99.99990% : 47790.868us 00:22:27.833 99.99999% : 47790.868us 00:22:27.833 00:22:27.833 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:22:27.833 ================================================================================= 00:22:27.833 1.00000% : 15526.991us 00:22:27.833 10.00000% : 16131.938us 00:22:27.833 25.00000% : 16535.237us 00:22:27.833 50.00000% : 17241.009us 00:22:27.833 75.00000% : 18047.606us 00:22:27.833 90.00000% : 19660.800us 00:22:27.833 95.00000% : 20669.046us 00:22:27.833 98.00000% : 22282.240us 00:22:27.833 99.00000% : 37103.458us 00:22:27.833 99.50000% : 45774.375us 00:22:27.833 99.90000% : 46580.972us 00:22:27.833 99.99000% : 46782.622us 00:22:27.833 99.99900% : 46782.622us 00:22:27.833 99.99990% : 46782.622us 00:22:27.833 99.99999% : 46782.622us 00:22:27.833 00:22:27.833 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:22:27.833 ================================================================================= 00:22:27.833 1.00000% : 15325.342us 00:22:27.833 10.00000% : 16031.114us 00:22:27.833 25.00000% : 16535.237us 00:22:27.833 50.00000% : 17241.009us 00:22:27.833 75.00000% : 18148.431us 00:22:27.833 90.00000% : 19459.151us 00:22:27.833 95.00000% : 20870.695us 00:22:27.833 98.00000% : 22181.415us 00:22:27.833 99.00000% : 35490.265us 00:22:27.833 99.50000% : 44362.831us 00:22:27.833 99.90000% : 45371.077us 00:22:27.833 99.99000% : 45572.726us 00:22:27.833 99.99900% : 45572.726us 00:22:27.833 99.99990% : 45572.726us 00:22:27.833 99.99999% : 45572.726us 00:22:27.833 00:22:27.833 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:22:27.833 ================================================================================= 00:22:27.833 1.00000% : 15224.517us 00:22:27.833 10.00000% : 16131.938us 00:22:27.833 25.00000% : 16636.062us 00:22:27.833 50.00000% : 17241.009us 00:22:27.833 75.00000% : 18148.431us 00:22:27.833 90.00000% : 19559.975us 00:22:27.833 95.00000% : 20971.520us 00:22:27.833 98.00000% : 22181.415us 
00:22:27.833 99.00000% : 33675.422us 00:22:27.833 99.50000% : 43152.935us 00:22:27.833 99.90000% : 43959.532us 00:22:27.833 99.99000% : 44161.182us 00:22:27.833 99.99900% : 44161.182us 00:22:27.833 99.99990% : 44161.182us 00:22:27.833 99.99999% : 44161.182us 00:22:27.833 00:22:27.833 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:22:27.833 ================================================================================= 00:22:27.833 1.00000% : 15022.868us 00:22:27.833 10.00000% : 16031.114us 00:22:27.833 25.00000% : 16535.237us 00:22:27.833 50.00000% : 17241.009us 00:22:27.833 75.00000% : 18148.431us 00:22:27.833 90.00000% : 19559.975us 00:22:27.833 95.00000% : 20971.520us 00:22:27.833 98.00000% : 22685.538us 00:22:27.833 99.00000% : 33272.123us 00:22:27.833 99.50000% : 43152.935us 00:22:27.833 99.90000% : 43959.532us 00:22:27.833 99.99000% : 44161.182us 00:22:27.833 99.99900% : 44161.182us 00:22:27.833 99.99990% : 44161.182us 00:22:27.833 99.99999% : 44161.182us 00:22:27.833 00:22:27.833 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:22:27.833 ================================================================================= 00:22:27.833 1.00000% : 15123.692us 00:22:27.833 10.00000% : 16031.114us 00:22:27.833 25.00000% : 16535.237us 00:22:27.833 50.00000% : 17140.185us 00:22:27.833 75.00000% : 18047.606us 00:22:27.833 90.00000% : 19862.449us 00:22:27.833 95.00000% : 20971.520us 00:22:27.833 98.00000% : 23088.837us 00:22:27.833 99.00000% : 24399.557us 00:22:27.833 99.50000% : 32465.526us 00:22:27.833 99.90000% : 33272.123us 00:22:27.833 99.99000% : 33473.772us 00:22:27.833 99.99900% : 33473.772us 00:22:27.833 99.99990% : 33473.772us 00:22:27.833 99.99999% : 33473.772us 00:22:27.833 00:22:27.833 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:22:27.833 ============================================================================== 00:22:27.833 Range in us Cumulative IO count 00:22:27.833 14922.043 - 15022.868: 0.0138% ( 1) 00:22:27.833 15022.868 - 15123.692: 0.1521% ( 10) 00:22:27.833 15123.692 - 15224.517: 0.4010% ( 18) 00:22:27.833 15224.517 - 15325.342: 0.7190% ( 23) 00:22:27.833 15325.342 - 15426.166: 1.2445% ( 38) 00:22:27.834 15426.166 - 15526.991: 1.9773% ( 53) 00:22:27.834 15526.991 - 15627.815: 2.8208% ( 61) 00:22:27.834 15627.815 - 15728.640: 4.0376% ( 88) 00:22:27.834 15728.640 - 15829.465: 5.6139% ( 114) 00:22:27.834 15829.465 - 15930.289: 7.5083% ( 137) 00:22:27.834 15930.289 - 16031.114: 9.5962% ( 151) 00:22:27.834 16031.114 - 16131.938: 12.0437% ( 177) 00:22:27.834 16131.938 - 16232.763: 15.0166% ( 215) 00:22:27.834 16232.763 - 16333.588: 18.1692% ( 228) 00:22:27.834 16333.588 - 16434.412: 21.6123% ( 249) 00:22:27.834 16434.412 - 16535.237: 25.2074% ( 260) 00:22:27.834 16535.237 - 16636.062: 29.2035% ( 289) 00:22:27.834 16636.062 - 16736.886: 33.2826% ( 295) 00:22:27.834 16736.886 - 16837.711: 37.4032% ( 298) 00:22:27.834 16837.711 - 16938.535: 41.4961% ( 296) 00:22:27.834 16938.535 - 17039.360: 45.1466% ( 264) 00:22:27.834 17039.360 - 17140.185: 48.7970% ( 264) 00:22:27.834 17140.185 - 17241.009: 52.3783% ( 259) 00:22:27.834 17241.009 - 17341.834: 56.1809% ( 275) 00:22:27.834 17341.834 - 17442.658: 59.6101% ( 248) 00:22:27.834 17442.658 - 17543.483: 62.7074% ( 224) 00:22:27.834 17543.483 - 17644.308: 65.6803% ( 215) 00:22:27.834 17644.308 - 17745.132: 68.6809% ( 217) 00:22:27.834 17745.132 - 17845.957: 71.4463% ( 200) 00:22:27.834 17845.957 - 17946.782: 73.9906% ( 184) 00:22:27.834 17946.782 - 18047.606: 76.1200% ( 
154) 00:22:27.834 18047.606 - 18148.431: 77.8485% ( 125) 00:22:27.834 18148.431 - 18249.255: 79.4663% ( 117) 00:22:27.834 18249.255 - 18350.080: 80.8905% ( 103) 00:22:27.834 18350.080 - 18450.905: 81.9690% ( 78) 00:22:27.834 18450.905 - 18551.729: 82.8816% ( 66) 00:22:27.834 18551.729 - 18652.554: 83.6007% ( 52) 00:22:27.834 18652.554 - 18753.378: 84.1952% ( 43) 00:22:27.834 18753.378 - 18854.203: 84.6930% ( 36) 00:22:27.834 18854.203 - 18955.028: 85.0387% ( 25) 00:22:27.834 18955.028 - 19055.852: 85.2876% ( 18) 00:22:27.834 19055.852 - 19156.677: 85.5088% ( 16) 00:22:27.834 19156.677 - 19257.502: 85.8407% ( 24) 00:22:27.834 19257.502 - 19358.326: 86.3247% ( 35) 00:22:27.834 19358.326 - 19459.151: 86.9192% ( 43) 00:22:27.834 19459.151 - 19559.975: 87.5968% ( 49) 00:22:27.834 19559.975 - 19660.800: 88.3158% ( 52) 00:22:27.834 19660.800 - 19761.625: 89.0763% ( 55) 00:22:27.834 19761.625 - 19862.449: 89.8921% ( 59) 00:22:27.834 19862.449 - 19963.274: 90.5697% ( 49) 00:22:27.834 19963.274 - 20064.098: 91.2334% ( 48) 00:22:27.834 20064.098 - 20164.923: 91.9524% ( 52) 00:22:27.834 20164.923 - 20265.748: 92.6162% ( 48) 00:22:27.834 20265.748 - 20366.572: 93.3628% ( 54) 00:22:27.834 20366.572 - 20467.397: 93.9159% ( 40) 00:22:27.834 20467.397 - 20568.222: 94.3169% ( 29) 00:22:27.834 20568.222 - 20669.046: 94.7871% ( 34) 00:22:27.834 20669.046 - 20769.871: 95.2019% ( 30) 00:22:27.834 20769.871 - 20870.695: 95.5752% ( 27) 00:22:27.834 20870.695 - 20971.520: 95.8794% ( 22) 00:22:27.834 20971.520 - 21072.345: 96.1145% ( 17) 00:22:27.834 21072.345 - 21173.169: 96.3219% ( 15) 00:22:27.834 21173.169 - 21273.994: 96.4740% ( 11) 00:22:27.834 21273.994 - 21374.818: 96.6261% ( 11) 00:22:27.834 21374.818 - 21475.643: 96.7782% ( 11) 00:22:27.834 21475.643 - 21576.468: 96.9441% ( 12) 00:22:27.834 21576.468 - 21677.292: 97.0824% ( 10) 00:22:27.834 21677.292 - 21778.117: 97.2483% ( 12) 00:22:27.834 21778.117 - 21878.942: 97.3313% ( 6) 00:22:27.834 21878.942 - 21979.766: 97.3451% ( 1) 00:22:27.834 21979.766 - 22080.591: 97.3590% ( 1) 00:22:27.834 22080.591 - 22181.415: 97.4281% ( 5) 00:22:27.834 22181.415 - 22282.240: 97.4834% ( 4) 00:22:27.834 22282.240 - 22383.065: 97.5664% ( 6) 00:22:27.834 22383.065 - 22483.889: 97.6355% ( 5) 00:22:27.834 22483.889 - 22584.714: 97.7185% ( 6) 00:22:27.834 22584.714 - 22685.538: 97.7876% ( 5) 00:22:27.834 22685.538 - 22786.363: 97.8706% ( 6) 00:22:27.834 22786.363 - 22887.188: 97.9397% ( 5) 00:22:27.834 22887.188 - 22988.012: 98.0227% ( 6) 00:22:27.834 22988.012 - 23088.837: 98.1056% ( 6) 00:22:27.834 23088.837 - 23189.662: 98.1748% ( 5) 00:22:27.834 23189.662 - 23290.486: 98.2301% ( 4) 00:22:27.834 36498.511 - 36700.160: 98.2992% ( 5) 00:22:27.834 36700.160 - 36901.809: 98.3960% ( 7) 00:22:27.834 36901.809 - 37103.458: 98.5066% ( 8) 00:22:27.834 37103.458 - 37305.108: 98.6034% ( 7) 00:22:27.834 37305.108 - 37506.757: 98.7140% ( 8) 00:22:27.834 37506.757 - 37708.406: 98.8108% ( 7) 00:22:27.834 37708.406 - 37910.055: 98.9215% ( 8) 00:22:27.834 37910.055 - 38111.705: 99.0183% ( 7) 00:22:27.834 38111.705 - 38313.354: 99.1150% ( 7) 00:22:27.834 45976.025 - 46177.674: 99.1980% ( 6) 00:22:27.834 46177.674 - 46379.323: 99.2948% ( 7) 00:22:27.834 46379.323 - 46580.972: 99.4054% ( 8) 00:22:27.834 46580.972 - 46782.622: 99.5022% ( 7) 00:22:27.834 46782.622 - 46984.271: 99.5990% ( 7) 00:22:27.834 46984.271 - 47185.920: 99.6958% ( 7) 00:22:27.834 47185.920 - 47387.569: 99.8064% ( 8) 00:22:27.834 47387.569 - 47589.218: 99.9032% ( 7) 00:22:27.834 47589.218 - 47790.868: 100.0000% ( 7) 
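Each histogram row above reads "low - high: cumulative% ( count )", with percentages cumulative over all completed I/Os, so a percentile is simply the first bucket whose cumulative percentage reaches the target; that is how the "99.00000% : 38111.705us" summary entry for this controller lines up with the "37910.055 - 38111.705: 99.0183%" bucket. A throwaway sketch for pulling that bucket out of a saved copy of this console output (the perf.log filename is hypothetical, and the field positions assume the row layout shown above once the leading timestamp is stripped; it stops at the first matching histogram in the file):

  # Print the first cumulative-histogram bucket at or above 99%.
  sed 's/^[0-9:.]* //' perf.log \
    | awk '$2 == "-" && $4+0 >= 99 { print "p99 bucket:", $1, "-", $3, $4; exit }'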
00:22:27.834 00:22:27.834 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:22:27.834 ============================================================================== 00:22:27.834 Range in us Cumulative IO count 00:22:27.834 15022.868 - 15123.692: 0.0277% ( 2) 00:22:27.834 15123.692 - 15224.517: 0.2074% ( 13) 00:22:27.834 15224.517 - 15325.342: 0.5116% ( 22) 00:22:27.834 15325.342 - 15426.166: 0.8711% ( 26) 00:22:27.834 15426.166 - 15526.991: 1.3827% ( 37) 00:22:27.834 15526.991 - 15627.815: 2.1433% ( 55) 00:22:27.834 15627.815 - 15728.640: 3.2633% ( 81) 00:22:27.834 15728.640 - 15829.465: 4.6322% ( 99) 00:22:27.834 15829.465 - 15930.289: 6.3468% ( 124) 00:22:27.834 15930.289 - 16031.114: 8.6421% ( 166) 00:22:27.834 16031.114 - 16131.938: 11.1726% ( 183) 00:22:27.834 16131.938 - 16232.763: 14.5603% ( 245) 00:22:27.834 16232.763 - 16333.588: 18.0033% ( 249) 00:22:27.834 16333.588 - 16434.412: 21.9027% ( 282) 00:22:27.834 16434.412 - 16535.237: 25.7190% ( 276) 00:22:27.834 16535.237 - 16636.062: 29.4110% ( 267) 00:22:27.834 16636.062 - 16736.886: 33.1720% ( 272) 00:22:27.834 16736.886 - 16837.711: 36.6704% ( 253) 00:22:27.834 16837.711 - 16938.535: 40.3208% ( 264) 00:22:27.834 16938.535 - 17039.360: 43.6670% ( 242) 00:22:27.834 17039.360 - 17140.185: 47.2622% ( 260) 00:22:27.834 17140.185 - 17241.009: 51.0924% ( 277) 00:22:27.834 17241.009 - 17341.834: 54.6598% ( 258) 00:22:27.834 17341.834 - 17442.658: 58.0614% ( 246) 00:22:27.834 17442.658 - 17543.483: 61.4215% ( 243) 00:22:27.834 17543.483 - 17644.308: 64.5188% ( 224) 00:22:27.834 17644.308 - 17745.132: 67.5332% ( 218) 00:22:27.834 17745.132 - 17845.957: 70.4646% ( 212) 00:22:27.834 17845.957 - 17946.782: 72.9535% ( 180) 00:22:27.834 17946.782 - 18047.606: 75.3733% ( 175) 00:22:27.834 18047.606 - 18148.431: 77.5442% ( 157) 00:22:27.834 18148.431 - 18249.255: 79.3418% ( 130) 00:22:27.834 18249.255 - 18350.080: 80.7660% ( 103) 00:22:27.834 18350.080 - 18450.905: 81.8446% ( 78) 00:22:27.834 18450.905 - 18551.729: 82.7848% ( 68) 00:22:27.834 18551.729 - 18652.554: 83.6283% ( 61) 00:22:27.834 18652.554 - 18753.378: 84.2782% ( 47) 00:22:27.834 18753.378 - 18854.203: 84.9696% ( 50) 00:22:27.834 18854.203 - 18955.028: 85.5780% ( 44) 00:22:27.834 18955.028 - 19055.852: 86.2694% ( 50) 00:22:27.834 19055.852 - 19156.677: 86.9054% ( 46) 00:22:27.834 19156.677 - 19257.502: 87.4723% ( 41) 00:22:27.834 19257.502 - 19358.326: 88.1361% ( 48) 00:22:27.834 19358.326 - 19459.151: 88.7860% ( 47) 00:22:27.834 19459.151 - 19559.975: 89.4220% ( 46) 00:22:27.834 19559.975 - 19660.800: 90.0442% ( 45) 00:22:27.834 19660.800 - 19761.625: 90.5282% ( 35) 00:22:27.834 19761.625 - 19862.449: 90.9983% ( 34) 00:22:27.834 19862.449 - 19963.274: 91.4546% ( 33) 00:22:27.834 19963.274 - 20064.098: 92.0216% ( 41) 00:22:27.834 20064.098 - 20164.923: 92.6853% ( 48) 00:22:27.834 20164.923 - 20265.748: 93.2384% ( 40) 00:22:27.834 20265.748 - 20366.572: 93.7915% ( 40) 00:22:27.834 20366.572 - 20467.397: 94.2063% ( 30) 00:22:27.834 20467.397 - 20568.222: 94.6764% ( 34) 00:22:27.834 20568.222 - 20669.046: 95.0913% ( 30) 00:22:27.834 20669.046 - 20769.871: 95.4646% ( 27) 00:22:27.834 20769.871 - 20870.695: 95.7688% ( 22) 00:22:27.834 20870.695 - 20971.520: 96.0177% ( 18) 00:22:27.834 20971.520 - 21072.345: 96.1975% ( 13) 00:22:27.834 21072.345 - 21173.169: 96.4049% ( 15) 00:22:27.834 21173.169 - 21273.994: 96.5708% ( 12) 00:22:27.834 21273.994 - 21374.818: 96.7644% ( 14) 00:22:27.834 21374.818 - 21475.643: 96.8750% ( 8) 00:22:27.834 21475.643 - 21576.468: 96.9994% ( 9) 
00:22:27.834 21576.468 - 21677.292: 97.1377% ( 10) 00:22:27.834 21677.292 - 21778.117: 97.3037% ( 12) 00:22:27.834 21778.117 - 21878.942: 97.4558% ( 11) 00:22:27.834 21878.942 - 21979.766: 97.5940% ( 10) 00:22:27.835 21979.766 - 22080.591: 97.7323% ( 10) 00:22:27.835 22080.591 - 22181.415: 97.8706% ( 10) 00:22:27.835 22181.415 - 22282.240: 98.0227% ( 11) 00:22:27.835 22282.240 - 22383.065: 98.1610% ( 10) 00:22:27.835 22383.065 - 22483.889: 98.2301% ( 5) 00:22:27.835 35288.615 - 35490.265: 98.2716% ( 3) 00:22:27.835 35490.265 - 35691.914: 98.3684% ( 7) 00:22:27.835 35691.914 - 35893.563: 98.4790% ( 8) 00:22:27.835 35893.563 - 36095.212: 98.5758% ( 7) 00:22:27.835 36095.212 - 36296.862: 98.6726% ( 7) 00:22:27.835 36296.862 - 36498.511: 98.7555% ( 6) 00:22:27.835 36498.511 - 36700.160: 98.8385% ( 6) 00:22:27.835 36700.160 - 36901.809: 98.9353% ( 7) 00:22:27.835 36901.809 - 37103.458: 99.0321% ( 7) 00:22:27.835 37103.458 - 37305.108: 99.1150% ( 6) 00:22:27.835 44766.129 - 44967.778: 99.1565% ( 3) 00:22:27.835 44967.778 - 45169.428: 99.2533% ( 7) 00:22:27.835 45169.428 - 45371.077: 99.3363% ( 6) 00:22:27.835 45371.077 - 45572.726: 99.4331% ( 7) 00:22:27.835 45572.726 - 45774.375: 99.5437% ( 8) 00:22:27.835 45774.375 - 45976.025: 99.6405% ( 7) 00:22:27.835 45976.025 - 46177.674: 99.7373% ( 7) 00:22:27.835 46177.674 - 46379.323: 99.8479% ( 8) 00:22:27.835 46379.323 - 46580.972: 99.9447% ( 7) 00:22:27.835 46580.972 - 46782.622: 100.0000% ( 4) 00:22:27.835 00:22:27.835 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:22:27.835 ============================================================================== 00:22:27.835 Range in us Cumulative IO count 00:22:27.835 14317.095 - 14417.920: 0.0553% ( 4) 00:22:27.835 14417.920 - 14518.745: 0.1106% ( 4) 00:22:27.835 14518.745 - 14619.569: 0.1659% ( 4) 00:22:27.835 14619.569 - 14720.394: 0.2351% ( 5) 00:22:27.835 14720.394 - 14821.218: 0.2765% ( 3) 00:22:27.835 14821.218 - 14922.043: 0.3733% ( 7) 00:22:27.835 14922.043 - 15022.868: 0.4701% ( 7) 00:22:27.835 15022.868 - 15123.692: 0.6084% ( 10) 00:22:27.835 15123.692 - 15224.517: 0.8435% ( 17) 00:22:27.835 15224.517 - 15325.342: 1.3551% ( 37) 00:22:27.835 15325.342 - 15426.166: 2.0326% ( 49) 00:22:27.835 15426.166 - 15526.991: 2.8761% ( 61) 00:22:27.835 15526.991 - 15627.815: 3.8164% ( 68) 00:22:27.835 15627.815 - 15728.640: 4.6875% ( 63) 00:22:27.835 15728.640 - 15829.465: 6.1670% ( 107) 00:22:27.835 15829.465 - 15930.289: 7.9369% ( 128) 00:22:27.835 15930.289 - 16031.114: 10.1908% ( 163) 00:22:27.835 16031.114 - 16131.938: 12.9287% ( 198) 00:22:27.835 16131.938 - 16232.763: 15.6250% ( 195) 00:22:27.835 16232.763 - 16333.588: 18.6256% ( 217) 00:22:27.835 16333.588 - 16434.412: 21.6952% ( 222) 00:22:27.835 16434.412 - 16535.237: 25.1521% ( 250) 00:22:27.835 16535.237 - 16636.062: 28.8579% ( 268) 00:22:27.835 16636.062 - 16736.886: 32.4253% ( 258) 00:22:27.835 16736.886 - 16837.711: 35.8407% ( 247) 00:22:27.835 16837.711 - 16938.535: 39.7677% ( 284) 00:22:27.835 16938.535 - 17039.360: 43.6670% ( 282) 00:22:27.835 17039.360 - 17140.185: 47.1654% ( 253) 00:22:27.835 17140.185 - 17241.009: 50.5946% ( 248) 00:22:27.835 17241.009 - 17341.834: 54.2312% ( 263) 00:22:27.835 17341.834 - 17442.658: 57.1764% ( 213) 00:22:27.835 17442.658 - 17543.483: 60.1217% ( 213) 00:22:27.835 17543.483 - 17644.308: 62.7074% ( 187) 00:22:27.835 17644.308 - 17745.132: 65.5559% ( 206) 00:22:27.835 17745.132 - 17845.957: 67.7683% ( 160) 00:22:27.835 17845.957 - 17946.782: 70.2710% ( 181) 00:22:27.835 17946.782 - 18047.606: 
72.6493% ( 172) 00:22:27.835 18047.606 - 18148.431: 75.0553% ( 174) 00:22:27.835 18148.431 - 18249.255: 77.0050% ( 141) 00:22:27.835 18249.255 - 18350.080: 78.6919% ( 122) 00:22:27.835 18350.080 - 18450.905: 80.3927% ( 123) 00:22:27.835 18450.905 - 18551.729: 81.7893% ( 101) 00:22:27.835 18551.729 - 18652.554: 83.3241% ( 111) 00:22:27.835 18652.554 - 18753.378: 84.4718% ( 83) 00:22:27.835 18753.378 - 18854.203: 85.6886% ( 88) 00:22:27.835 18854.203 - 18955.028: 86.6980% ( 73) 00:22:27.835 18955.028 - 19055.852: 87.5277% ( 60) 00:22:27.835 19055.852 - 19156.677: 88.4541% ( 67) 00:22:27.835 19156.677 - 19257.502: 89.3529% ( 65) 00:22:27.835 19257.502 - 19358.326: 89.9336% ( 42) 00:22:27.835 19358.326 - 19459.151: 90.4867% ( 40) 00:22:27.835 19459.151 - 19559.975: 90.8186% ( 24) 00:22:27.835 19559.975 - 19660.800: 91.3855% ( 41) 00:22:27.835 19660.800 - 19761.625: 91.7174% ( 24) 00:22:27.835 19761.625 - 19862.449: 92.1598% ( 32) 00:22:27.835 19862.449 - 19963.274: 92.5470% ( 28) 00:22:27.835 19963.274 - 20064.098: 92.7406% ( 14) 00:22:27.835 20064.098 - 20164.923: 93.1001% ( 26) 00:22:27.835 20164.923 - 20265.748: 93.4181% ( 23) 00:22:27.835 20265.748 - 20366.572: 93.6532% ( 17) 00:22:27.835 20366.572 - 20467.397: 93.8883% ( 17) 00:22:27.835 20467.397 - 20568.222: 94.2063% ( 23) 00:22:27.835 20568.222 - 20669.046: 94.4552% ( 18) 00:22:27.835 20669.046 - 20769.871: 94.8424% ( 28) 00:22:27.835 20769.871 - 20870.695: 95.0636% ( 16) 00:22:27.835 20870.695 - 20971.520: 95.4231% ( 26) 00:22:27.835 20971.520 - 21072.345: 95.6444% ( 16) 00:22:27.835 21072.345 - 21173.169: 96.0039% ( 26) 00:22:27.835 21173.169 - 21273.994: 96.4325% ( 31) 00:22:27.835 21273.994 - 21374.818: 96.6123% ( 13) 00:22:27.835 21374.818 - 21475.643: 96.9441% ( 24) 00:22:27.835 21475.643 - 21576.468: 97.1377% ( 14) 00:22:27.835 21576.468 - 21677.292: 97.3313% ( 14) 00:22:27.835 21677.292 - 21778.117: 97.4558% ( 9) 00:22:27.835 21778.117 - 21878.942: 97.7323% ( 20) 00:22:27.835 21878.942 - 21979.766: 97.8844% ( 11) 00:22:27.835 21979.766 - 22080.591: 97.9950% ( 8) 00:22:27.835 22080.591 - 22181.415: 98.1195% ( 9) 00:22:27.835 22181.415 - 22282.240: 98.1748% ( 4) 00:22:27.835 22282.240 - 22383.065: 98.2301% ( 4) 00:22:27.835 33675.422 - 33877.071: 98.3269% ( 7) 00:22:27.835 33877.071 - 34078.720: 98.4098% ( 6) 00:22:27.835 34078.720 - 34280.369: 98.5205% ( 8) 00:22:27.835 34280.369 - 34482.018: 98.5758% ( 4) 00:22:27.835 34482.018 - 34683.668: 98.7002% ( 9) 00:22:27.835 34683.668 - 34885.317: 98.7694% ( 5) 00:22:27.835 34885.317 - 35086.966: 98.8662% ( 7) 00:22:27.835 35086.966 - 35288.615: 98.9629% ( 7) 00:22:27.835 35288.615 - 35490.265: 99.0597% ( 7) 00:22:27.835 35490.265 - 35691.914: 99.1150% ( 4) 00:22:27.835 43354.585 - 43556.234: 99.1565% ( 3) 00:22:27.835 43556.234 - 43757.883: 99.2257% ( 5) 00:22:27.835 43757.883 - 43959.532: 99.3363% ( 8) 00:22:27.835 43959.532 - 44161.182: 99.3916% ( 4) 00:22:27.835 44161.182 - 44362.831: 99.5022% ( 8) 00:22:27.835 44362.831 - 44564.480: 99.5990% ( 7) 00:22:27.835 44564.480 - 44766.129: 99.6958% ( 7) 00:22:27.835 44766.129 - 44967.778: 99.7788% ( 6) 00:22:27.835 44967.778 - 45169.428: 99.8617% ( 6) 00:22:27.835 45169.428 - 45371.077: 99.9723% ( 8) 00:22:27.835 45371.077 - 45572.726: 100.0000% ( 2) 00:22:27.835 00:22:27.835 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:22:27.835 ============================================================================== 00:22:27.835 Range in us Cumulative IO count 00:22:27.835 14014.622 - 14115.446: 0.0415% ( 3) 00:22:27.835 
14115.446 - 14216.271: 0.0968% ( 4) 00:22:27.835 14216.271 - 14317.095: 0.1798% ( 6) 00:22:27.835 14317.095 - 14417.920: 0.2489% ( 5) 00:22:27.835 14417.920 - 14518.745: 0.3042% ( 4) 00:22:27.835 14518.745 - 14619.569: 0.3733% ( 5) 00:22:27.835 14619.569 - 14720.394: 0.4425% ( 5) 00:22:27.835 14720.394 - 14821.218: 0.5116% ( 5) 00:22:27.835 14821.218 - 14922.043: 0.5808% ( 5) 00:22:27.835 14922.043 - 15022.868: 0.7190% ( 10) 00:22:27.835 15022.868 - 15123.692: 0.9541% ( 17) 00:22:27.835 15123.692 - 15224.517: 1.4104% ( 33) 00:22:27.835 15224.517 - 15325.342: 1.8805% ( 34) 00:22:27.835 15325.342 - 15426.166: 2.3507% ( 34) 00:22:27.835 15426.166 - 15526.991: 2.7378% ( 28) 00:22:27.835 15526.991 - 15627.815: 3.3186% ( 42) 00:22:27.835 15627.815 - 15728.640: 4.3142% ( 72) 00:22:27.835 15728.640 - 15829.465: 5.5586% ( 90) 00:22:27.835 15829.465 - 15930.289: 6.8861% ( 96) 00:22:27.835 15930.289 - 16031.114: 8.7942% ( 138) 00:22:27.835 16031.114 - 16131.938: 11.1864% ( 173) 00:22:27.835 16131.938 - 16232.763: 13.8274% ( 191) 00:22:27.835 16232.763 - 16333.588: 16.7865% ( 214) 00:22:27.835 16333.588 - 16434.412: 19.9668% ( 230) 00:22:27.835 16434.412 - 16535.237: 23.7140% ( 271) 00:22:27.835 16535.237 - 16636.062: 27.7793% ( 294) 00:22:27.835 16636.062 - 16736.886: 31.8446% ( 294) 00:22:27.835 16736.886 - 16837.711: 35.9237% ( 295) 00:22:27.835 16837.711 - 16938.535: 40.1272% ( 304) 00:22:27.836 16938.535 - 17039.360: 44.3308% ( 304) 00:22:27.836 17039.360 - 17140.185: 47.9121% ( 259) 00:22:27.836 17140.185 - 17241.009: 51.4381% ( 255) 00:22:27.836 17241.009 - 17341.834: 54.7013% ( 236) 00:22:27.836 17341.834 - 17442.658: 58.0337% ( 241) 00:22:27.836 17442.658 - 17543.483: 61.1449% ( 225) 00:22:27.836 17543.483 - 17644.308: 63.9104% ( 200) 00:22:27.836 17644.308 - 17745.132: 66.5929% ( 194) 00:22:27.836 17745.132 - 17845.957: 69.2201% ( 190) 00:22:27.836 17845.957 - 17946.782: 71.5708% ( 170) 00:22:27.836 17946.782 - 18047.606: 74.0459% ( 179) 00:22:27.836 18047.606 - 18148.431: 76.3551% ( 167) 00:22:27.836 18148.431 - 18249.255: 78.2909% ( 140) 00:22:27.836 18249.255 - 18350.080: 80.0608% ( 128) 00:22:27.836 18350.080 - 18450.905: 81.7616% ( 123) 00:22:27.836 18450.905 - 18551.729: 83.2135% ( 105) 00:22:27.836 18551.729 - 18652.554: 84.4027% ( 86) 00:22:27.836 18652.554 - 18753.378: 85.5227% ( 81) 00:22:27.836 18753.378 - 18854.203: 86.2417% ( 52) 00:22:27.836 18854.203 - 18955.028: 86.9884% ( 54) 00:22:27.836 18955.028 - 19055.852: 87.6106% ( 45) 00:22:27.836 19055.852 - 19156.677: 88.1361% ( 38) 00:22:27.836 19156.677 - 19257.502: 88.6477% ( 37) 00:22:27.836 19257.502 - 19358.326: 89.1593% ( 37) 00:22:27.836 19358.326 - 19459.151: 89.6294% ( 34) 00:22:27.836 19459.151 - 19559.975: 90.1410% ( 37) 00:22:27.836 19559.975 - 19660.800: 90.7080% ( 41) 00:22:27.836 19660.800 - 19761.625: 91.3717% ( 48) 00:22:27.836 19761.625 - 19862.449: 91.9524% ( 42) 00:22:27.836 19862.449 - 19963.274: 92.4640% ( 37) 00:22:27.836 19963.274 - 20064.098: 92.8789% ( 30) 00:22:27.836 20064.098 - 20164.923: 93.2246% ( 25) 00:22:27.836 20164.923 - 20265.748: 93.5702% ( 25) 00:22:27.836 20265.748 - 20366.572: 93.9298% ( 26) 00:22:27.836 20366.572 - 20467.397: 94.2340% ( 22) 00:22:27.836 20467.397 - 20568.222: 94.4552% ( 16) 00:22:27.836 20568.222 - 20669.046: 94.6903% ( 17) 00:22:27.836 20669.046 - 20769.871: 94.8424% ( 11) 00:22:27.836 20769.871 - 20870.695: 94.9945% ( 11) 00:22:27.836 20870.695 - 20971.520: 95.1604% ( 12) 00:22:27.836 20971.520 - 21072.345: 95.3125% ( 11) 00:22:27.836 21072.345 - 21173.169: 95.5337% ( 
16) 00:22:27.836 21173.169 - 21273.994: 95.8103% ( 20) 00:22:27.836 21273.994 - 21374.818: 96.1145% ( 22) 00:22:27.836 21374.818 - 21475.643: 96.4187% ( 22) 00:22:27.836 21475.643 - 21576.468: 96.7091% ( 21) 00:22:27.836 21576.468 - 21677.292: 96.9580% ( 18) 00:22:27.836 21677.292 - 21778.117: 97.1792% ( 16) 00:22:27.836 21778.117 - 21878.942: 97.4143% ( 17) 00:22:27.836 21878.942 - 21979.766: 97.6493% ( 17) 00:22:27.836 21979.766 - 22080.591: 97.8844% ( 17) 00:22:27.836 22080.591 - 22181.415: 98.0642% ( 13) 00:22:27.836 22181.415 - 22282.240: 98.2163% ( 11) 00:22:27.836 22282.240 - 22383.065: 98.2301% ( 1) 00:22:27.836 32062.228 - 32263.877: 98.3269% ( 7) 00:22:27.836 32263.877 - 32465.526: 98.4098% ( 6) 00:22:27.836 32465.526 - 32667.175: 98.5066% ( 7) 00:22:27.836 32667.175 - 32868.825: 98.6173% ( 8) 00:22:27.836 32868.825 - 33070.474: 98.7140% ( 7) 00:22:27.836 33070.474 - 33272.123: 98.8108% ( 7) 00:22:27.836 33272.123 - 33473.772: 98.9215% ( 8) 00:22:27.836 33473.772 - 33675.422: 99.0044% ( 6) 00:22:27.836 33675.422 - 33877.071: 99.1012% ( 7) 00:22:27.836 33877.071 - 34078.720: 99.1150% ( 1) 00:22:27.836 41943.040 - 42144.689: 99.1289% ( 1) 00:22:27.836 42144.689 - 42346.338: 99.2118% ( 6) 00:22:27.836 42346.338 - 42547.988: 99.3086% ( 7) 00:22:27.836 42547.988 - 42749.637: 99.4054% ( 7) 00:22:27.836 42749.637 - 42951.286: 99.4884% ( 6) 00:22:27.836 42951.286 - 43152.935: 99.5852% ( 7) 00:22:27.836 43152.935 - 43354.585: 99.6820% ( 7) 00:22:27.836 43354.585 - 43556.234: 99.7788% ( 7) 00:22:27.836 43556.234 - 43757.883: 99.8756% ( 7) 00:22:27.836 43757.883 - 43959.532: 99.9723% ( 7) 00:22:27.836 43959.532 - 44161.182: 100.0000% ( 2) 00:22:27.836 00:22:27.836 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:22:27.836 ============================================================================== 00:22:27.836 Range in us Cumulative IO count 00:22:27.836 12502.252 - 12552.665: 0.0138% ( 1) 00:22:27.836 12552.665 - 12603.077: 0.0553% ( 3) 00:22:27.836 12603.077 - 12653.489: 0.0830% ( 2) 00:22:27.836 12653.489 - 12703.902: 0.1244% ( 3) 00:22:27.836 12703.902 - 12754.314: 0.1521% ( 2) 00:22:27.836 12754.314 - 12804.726: 0.1798% ( 2) 00:22:27.836 12804.726 - 12855.138: 0.2074% ( 2) 00:22:27.836 12855.138 - 12905.551: 0.2489% ( 3) 00:22:27.836 12905.551 - 13006.375: 0.3042% ( 4) 00:22:27.836 13006.375 - 13107.200: 0.3733% ( 5) 00:22:27.836 13107.200 - 13208.025: 0.4287% ( 4) 00:22:27.836 13208.025 - 13308.849: 0.4978% ( 5) 00:22:27.836 13308.849 - 13409.674: 0.5531% ( 4) 00:22:27.836 13409.674 - 13510.498: 0.6222% ( 5) 00:22:27.836 13510.498 - 13611.323: 0.6775% ( 4) 00:22:27.836 13611.323 - 13712.148: 0.7467% ( 5) 00:22:27.836 13712.148 - 13812.972: 0.7882% ( 3) 00:22:27.836 13812.972 - 13913.797: 0.8435% ( 4) 00:22:27.836 13913.797 - 14014.622: 0.8711% ( 2) 00:22:27.836 14014.622 - 14115.446: 0.8850% ( 1) 00:22:27.836 14821.218 - 14922.043: 0.9403% ( 4) 00:22:27.836 14922.043 - 15022.868: 1.0647% ( 9) 00:22:27.836 15022.868 - 15123.692: 1.2030% ( 10) 00:22:27.836 15123.692 - 15224.517: 1.4381% ( 17) 00:22:27.836 15224.517 - 15325.342: 1.7008% ( 19) 00:22:27.836 15325.342 - 15426.166: 2.3092% ( 44) 00:22:27.836 15426.166 - 15526.991: 2.8761% ( 41) 00:22:27.836 15526.991 - 15627.815: 3.8440% ( 70) 00:22:27.836 15627.815 - 15728.640: 4.8534% ( 73) 00:22:27.836 15728.640 - 15829.465: 6.2915% ( 104) 00:22:27.836 15829.465 - 15930.289: 8.2826% ( 144) 00:22:27.836 15930.289 - 16031.114: 10.5365% ( 163) 00:22:27.836 16031.114 - 16131.938: 12.9840% ( 177) 00:22:27.836 16131.938 - 
16232.763: 15.8462% ( 207) 00:22:27.836 16232.763 - 16333.588: 18.8053% ( 214) 00:22:27.836 16333.588 - 16434.412: 21.7644% ( 214) 00:22:27.836 16434.412 - 16535.237: 25.4840% ( 269) 00:22:27.836 16535.237 - 16636.062: 29.7152% ( 306) 00:22:27.836 16636.062 - 16736.886: 33.8772% ( 301) 00:22:27.836 16736.886 - 16837.711: 37.8733% ( 289) 00:22:27.836 16837.711 - 16938.535: 41.6759% ( 275) 00:22:27.836 16938.535 - 17039.360: 45.2434% ( 258) 00:22:27.836 17039.360 - 17140.185: 48.8385% ( 260) 00:22:27.836 17140.185 - 17241.009: 52.1294% ( 238) 00:22:27.836 17241.009 - 17341.834: 55.4065% ( 237) 00:22:27.836 17341.834 - 17442.658: 58.6560% ( 235) 00:22:27.836 17442.658 - 17543.483: 61.5044% ( 206) 00:22:27.836 17543.483 - 17644.308: 64.3944% ( 209) 00:22:27.836 17644.308 - 17745.132: 67.2981% ( 210) 00:22:27.836 17745.132 - 17845.957: 69.8838% ( 187) 00:22:27.836 17845.957 - 17946.782: 72.2069% ( 168) 00:22:27.836 17946.782 - 18047.606: 74.4607% ( 163) 00:22:27.836 18047.606 - 18148.431: 76.4381% ( 143) 00:22:27.836 18148.431 - 18249.255: 78.2771% ( 133) 00:22:27.836 18249.255 - 18350.080: 79.9087% ( 118) 00:22:27.836 18350.080 - 18450.905: 81.3053% ( 101) 00:22:27.836 18450.905 - 18551.729: 82.6466% ( 97) 00:22:27.836 18551.729 - 18652.554: 83.8910% ( 90) 00:22:27.836 18652.554 - 18753.378: 84.7760% ( 64) 00:22:27.836 18753.378 - 18854.203: 85.6056% ( 60) 00:22:27.836 18854.203 - 18955.028: 86.3662% ( 55) 00:22:27.836 18955.028 - 19055.852: 86.9607% ( 43) 00:22:27.836 19055.852 - 19156.677: 87.5691% ( 44) 00:22:27.836 19156.677 - 19257.502: 88.2743% ( 51) 00:22:27.836 19257.502 - 19358.326: 88.8827% ( 44) 00:22:27.836 19358.326 - 19459.151: 89.5326% ( 47) 00:22:27.836 19459.151 - 19559.975: 90.3208% ( 57) 00:22:27.836 19559.975 - 19660.800: 90.7633% ( 32) 00:22:27.836 19660.800 - 19761.625: 91.1781% ( 30) 00:22:27.836 19761.625 - 19862.449: 91.5514% ( 27) 00:22:27.836 19862.449 - 19963.274: 91.9801% ( 31) 00:22:27.836 19963.274 - 20064.098: 92.4087% ( 31) 00:22:27.836 20064.098 - 20164.923: 92.8097% ( 29) 00:22:27.836 20164.923 - 20265.748: 93.1969% ( 28) 00:22:27.836 20265.748 - 20366.572: 93.4735% ( 20) 00:22:27.836 20366.572 - 20467.397: 93.7638% ( 21) 00:22:27.836 20467.397 - 20568.222: 94.0404% ( 20) 00:22:27.836 20568.222 - 20669.046: 94.3308% ( 21) 00:22:27.836 20669.046 - 20769.871: 94.5796% ( 18) 00:22:27.836 20769.871 - 20870.695: 94.7594% ( 13) 00:22:27.836 20870.695 - 20971.520: 95.0083% ( 18) 00:22:27.836 20971.520 - 21072.345: 95.2987% ( 21) 00:22:27.836 21072.345 - 21173.169: 95.5752% ( 20) 00:22:27.836 21173.169 - 21273.994: 95.8103% ( 17) 00:22:27.836 21273.994 - 21374.818: 96.0454% ( 17) 00:22:27.836 21374.818 - 21475.643: 96.2389% ( 14) 00:22:27.836 21475.643 - 21576.468: 96.4463% ( 15) 00:22:27.836 21576.468 - 21677.292: 96.6399% ( 14) 00:22:27.836 21677.292 - 21778.117: 96.7782% ( 10) 00:22:27.836 21778.117 - 21878.942: 96.9303% ( 11) 00:22:27.836 21878.942 - 21979.766: 97.3037% ( 27) 00:22:27.836 21979.766 - 22080.591: 97.5387% ( 17) 00:22:27.836 22080.591 - 22181.415: 97.6355% ( 7) 00:22:27.836 22181.415 - 22282.240: 97.7600% ( 9) 00:22:27.836 22282.240 - 22383.065: 97.8291% ( 5) 00:22:27.836 22383.065 - 22483.889: 97.8844% ( 4) 00:22:27.836 22483.889 - 22584.714: 97.9674% ( 6) 00:22:27.836 22584.714 - 22685.538: 98.0365% ( 5) 00:22:27.836 22685.538 - 22786.363: 98.1056% ( 5) 00:22:27.836 22786.363 - 22887.188: 98.1886% ( 6) 00:22:27.836 22887.188 - 22988.012: 98.2301% ( 3) 00:22:27.836 31457.280 - 31658.929: 98.2439% ( 1) 00:22:27.837 31658.929 - 31860.578: 98.3269% ( 
6) 00:22:27.837 31860.578 - 32062.228: 98.4237% ( 7) 00:22:27.837 32062.228 - 32263.877: 98.5205% ( 7) 00:22:27.837 32263.877 - 32465.526: 98.6311% ( 8) 00:22:27.837 32465.526 - 32667.175: 98.7279% ( 7) 00:22:27.837 32667.175 - 32868.825: 98.8108% ( 6) 00:22:27.837 32868.825 - 33070.474: 98.9215% ( 8) 00:22:27.837 33070.474 - 33272.123: 99.0321% ( 8) 00:22:27.837 33272.123 - 33473.772: 99.1150% ( 6) 00:22:27.837 41943.040 - 42144.689: 99.1289% ( 1) 00:22:27.837 42144.689 - 42346.338: 99.2118% ( 6) 00:22:27.837 42346.338 - 42547.988: 99.3086% ( 7) 00:22:27.837 42547.988 - 42749.637: 99.3916% ( 6) 00:22:27.837 42749.637 - 42951.286: 99.4884% ( 7) 00:22:27.837 42951.286 - 43152.935: 99.5713% ( 6) 00:22:27.837 43152.935 - 43354.585: 99.6681% ( 7) 00:22:27.837 43354.585 - 43556.234: 99.7649% ( 7) 00:22:27.837 43556.234 - 43757.883: 99.8479% ( 6) 00:22:27.837 43757.883 - 43959.532: 99.9447% ( 7) 00:22:27.837 43959.532 - 44161.182: 100.0000% ( 4) 00:22:27.837 00:22:27.837 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:22:27.837 ============================================================================== 00:22:27.837 Range in us Cumulative IO count 00:22:27.837 12300.603 - 12351.015: 0.0137% ( 1) 00:22:27.837 12351.015 - 12401.428: 0.0411% ( 2) 00:22:27.837 12401.428 - 12451.840: 0.0685% ( 2) 00:22:27.837 12451.840 - 12502.252: 0.1096% ( 3) 00:22:27.837 12502.252 - 12552.665: 0.1371% ( 2) 00:22:27.837 12552.665 - 12603.077: 0.1782% ( 3) 00:22:27.837 12603.077 - 12653.489: 0.2056% ( 2) 00:22:27.837 12653.489 - 12703.902: 0.2193% ( 1) 00:22:27.837 12703.902 - 12754.314: 0.2604% ( 3) 00:22:27.837 12754.314 - 12804.726: 0.2878% ( 2) 00:22:27.837 12804.726 - 12855.138: 0.3289% ( 3) 00:22:27.837 12855.138 - 12905.551: 0.3564% ( 2) 00:22:27.837 12905.551 - 13006.375: 0.4249% ( 5) 00:22:27.837 13006.375 - 13107.200: 0.4797% ( 4) 00:22:27.837 13107.200 - 13208.025: 0.5482% ( 5) 00:22:27.837 13208.025 - 13308.849: 0.6031% ( 4) 00:22:27.837 13308.849 - 13409.674: 0.6579% ( 4) 00:22:27.837 13409.674 - 13510.498: 0.7264% ( 5) 00:22:27.837 13510.498 - 13611.323: 0.7812% ( 4) 00:22:27.837 13611.323 - 13712.148: 0.8498% ( 5) 00:22:27.837 13712.148 - 13812.972: 0.8772% ( 2) 00:22:27.837 14821.218 - 14922.043: 0.9046% ( 2) 00:22:27.837 14922.043 - 15022.868: 0.9868% ( 6) 00:22:27.837 15022.868 - 15123.692: 1.0828% ( 7) 00:22:27.837 15123.692 - 15224.517: 1.3021% ( 16) 00:22:27.837 15224.517 - 15325.342: 1.5625% ( 19) 00:22:27.837 15325.342 - 15426.166: 1.9052% ( 25) 00:22:27.837 15426.166 - 15526.991: 2.4260% ( 38) 00:22:27.837 15526.991 - 15627.815: 3.1524% ( 53) 00:22:27.837 15627.815 - 15728.640: 4.4682% ( 96) 00:22:27.837 15728.640 - 15829.465: 6.1952% ( 126) 00:22:27.837 15829.465 - 15930.289: 8.0866% ( 138) 00:22:27.837 15930.289 - 16031.114: 10.6223% ( 185) 00:22:27.837 16031.114 - 16131.938: 13.4868% ( 209) 00:22:27.837 16131.938 - 16232.763: 16.2966% ( 205) 00:22:27.837 16232.763 - 16333.588: 19.5724% ( 239) 00:22:27.837 16333.588 - 16434.412: 23.2593% ( 269) 00:22:27.837 16434.412 - 16535.237: 27.1519% ( 284) 00:22:27.837 16535.237 - 16636.062: 31.0170% ( 282) 00:22:27.837 16636.062 - 16736.886: 34.9918% ( 290) 00:22:27.837 16736.886 - 16837.711: 39.1173% ( 301) 00:22:27.837 16837.711 - 16938.535: 42.9139% ( 277) 00:22:27.837 16938.535 - 17039.360: 46.9161% ( 292) 00:22:27.837 17039.360 - 17140.185: 51.1376% ( 308) 00:22:27.837 17140.185 - 17241.009: 54.6464% ( 256) 00:22:27.837 17241.009 - 17341.834: 57.7577% ( 227) 00:22:27.837 17341.834 - 17442.658: 60.9238% ( 231) 00:22:27.837 
17442.658 - 17543.483: 64.1036% ( 232) 00:22:27.837 17543.483 - 17644.308: 66.7489% ( 193) 00:22:27.837 17644.308 - 17745.132: 69.4216% ( 195) 00:22:27.837 17745.132 - 17845.957: 72.0395% ( 191) 00:22:27.837 17845.957 - 17946.782: 74.1776% ( 156) 00:22:27.837 17946.782 - 18047.606: 75.9868% ( 132) 00:22:27.837 18047.606 - 18148.431: 77.7412% ( 128) 00:22:27.837 18148.431 - 18249.255: 79.2215% ( 108) 00:22:27.837 18249.255 - 18350.080: 80.5373% ( 96) 00:22:27.837 18350.080 - 18450.905: 81.7023% ( 85) 00:22:27.837 18450.905 - 18551.729: 82.5795% ( 64) 00:22:27.837 18551.729 - 18652.554: 83.4019% ( 60) 00:22:27.837 18652.554 - 18753.378: 84.1009% ( 51) 00:22:27.837 18753.378 - 18854.203: 84.8547% ( 55) 00:22:27.837 18854.203 - 18955.028: 85.5400% ( 50) 00:22:27.837 18955.028 - 19055.852: 86.1157% ( 42) 00:22:27.837 19055.852 - 19156.677: 86.6365% ( 38) 00:22:27.837 19156.677 - 19257.502: 87.0477% ( 30) 00:22:27.837 19257.502 - 19358.326: 87.4452% ( 29) 00:22:27.837 19358.326 - 19459.151: 88.0071% ( 41) 00:22:27.837 19459.151 - 19559.975: 88.6513% ( 47) 00:22:27.837 19559.975 - 19660.800: 89.2407% ( 43) 00:22:27.837 19660.800 - 19761.625: 89.7889% ( 40) 00:22:27.837 19761.625 - 19862.449: 90.1864% ( 29) 00:22:27.837 19862.449 - 19963.274: 90.5702% ( 28) 00:22:27.837 19963.274 - 20064.098: 90.8991% ( 24) 00:22:27.837 20064.098 - 20164.923: 91.4337% ( 39) 00:22:27.837 20164.923 - 20265.748: 91.8448% ( 30) 00:22:27.837 20265.748 - 20366.572: 92.4068% ( 41) 00:22:27.837 20366.572 - 20467.397: 92.9688% ( 41) 00:22:27.837 20467.397 - 20568.222: 93.6266% ( 48) 00:22:27.837 20568.222 - 20669.046: 94.1338% ( 37) 00:22:27.837 20669.046 - 20769.871: 94.5861% ( 33) 00:22:27.837 20769.871 - 20870.695: 94.9424% ( 26) 00:22:27.837 20870.695 - 20971.520: 95.2851% ( 25) 00:22:27.837 20971.520 - 21072.345: 95.6552% ( 27) 00:22:27.837 21072.345 - 21173.169: 95.9978% ( 25) 00:22:27.837 21173.169 - 21273.994: 96.2993% ( 22) 00:22:27.837 21273.994 - 21374.818: 96.5872% ( 21) 00:22:27.837 21374.818 - 21475.643: 96.8476% ( 19) 00:22:27.837 21475.643 - 21576.468: 96.9298% ( 6) 00:22:27.837 21576.468 - 21677.292: 96.9984% ( 5) 00:22:27.837 21677.292 - 21778.117: 97.0532% ( 4) 00:22:27.837 21778.117 - 21878.942: 97.1080% ( 4) 00:22:27.837 21878.942 - 21979.766: 97.1765% ( 5) 00:22:27.837 21979.766 - 22080.591: 97.2451% ( 5) 00:22:27.837 22080.591 - 22181.415: 97.3136% ( 5) 00:22:27.837 22181.415 - 22282.240: 97.3684% ( 4) 00:22:27.837 22383.065 - 22483.889: 97.4232% ( 4) 00:22:27.837 22483.889 - 22584.714: 97.5877% ( 12) 00:22:27.837 22584.714 - 22685.538: 97.6700% ( 6) 00:22:27.837 22685.538 - 22786.363: 97.7111% ( 3) 00:22:27.837 22786.363 - 22887.188: 97.8070% ( 7) 00:22:27.837 22887.188 - 22988.012: 97.9304% ( 9) 00:22:27.837 22988.012 - 23088.837: 98.0263% ( 7) 00:22:27.837 23088.837 - 23189.662: 98.1360% ( 8) 00:22:27.837 23189.662 - 23290.486: 98.2593% ( 9) 00:22:27.837 23290.486 - 23391.311: 98.3690% ( 8) 00:22:27.837 23391.311 - 23492.135: 98.4923% ( 9) 00:22:27.837 23492.135 - 23592.960: 98.6157% ( 9) 00:22:27.837 23592.960 - 23693.785: 98.6842% ( 5) 00:22:27.837 23693.785 - 23794.609: 98.7253% ( 3) 00:22:27.837 23794.609 - 23895.434: 98.7802% ( 4) 00:22:27.837 23895.434 - 23996.258: 98.8213% ( 3) 00:22:27.837 23996.258 - 24097.083: 98.8761% ( 4) 00:22:27.837 24097.083 - 24197.908: 98.9309% ( 4) 00:22:27.837 24197.908 - 24298.732: 98.9720% ( 3) 00:22:27.837 24298.732 - 24399.557: 99.0132% ( 3) 00:22:27.837 24399.557 - 24500.382: 99.0680% ( 4) 00:22:27.837 24500.382 - 24601.206: 99.1091% ( 3) 00:22:27.837 
24601.206 - 24702.031: 99.1228% ( 1) 00:22:27.837 31457.280 - 31658.929: 99.1913% ( 5) 00:22:27.837 31658.929 - 31860.578: 99.2873% ( 7) 00:22:27.837 31860.578 - 32062.228: 99.3969% ( 8) 00:22:27.837 32062.228 - 32263.877: 99.4929% ( 7) 00:22:27.837 32263.877 - 32465.526: 99.5888% ( 7) 00:22:27.837 32465.526 - 32667.175: 99.6848% ( 7) 00:22:27.837 32667.175 - 32868.825: 99.7807% ( 7) 00:22:27.837 32868.825 - 33070.474: 99.8766% ( 7) 00:22:27.837 33070.474 - 33272.123: 99.9863% ( 8) 00:22:27.837 33272.123 - 33473.772: 100.0000% ( 1) 00:22:27.837 00:22:27.837 23:04:06 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:22:29.223 Initializing NVMe Controllers 00:22:29.223 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:29.223 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:29.223 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:29.223 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:29.223 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:22:29.223 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:22:29.223 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:29.223 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:22:29.223 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:22:29.223 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:22:29.223 Initialization complete. Launching workers. 00:22:29.223 ======================================================== 00:22:29.223 Latency(us) 00:22:29.223 Device Information : IOPS MiB/s Average min max 00:22:29.223 PCIE (0000:00:11.0) NSID 1 from core 0: 7156.99 83.87 17920.07 13132.88 45910.40 00:22:29.223 PCIE (0000:00:13.0) NSID 1 from core 0: 7156.99 83.87 17889.27 13188.95 44703.87 00:22:29.223 PCIE (0000:00:10.0) NSID 1 from core 0: 7156.99 83.87 17854.49 13236.29 43525.38 00:22:29.223 PCIE (0000:00:12.0) NSID 1 from core 0: 7156.99 83.87 17820.08 12877.61 41405.45 00:22:29.223 PCIE (0000:00:12.0) NSID 2 from core 0: 7156.99 83.87 17787.19 11374.20 41082.36 00:22:29.223 PCIE (0000:00:12.0) NSID 3 from core 0: 7220.89 84.62 17596.76 10791.75 30736.44 00:22:29.223 ======================================================== 00:22:29.223 Total : 43005.81 503.97 17810.99 10791.75 45910.40 00:22:29.223 00:22:29.223 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:22:29.223 ================================================================================= 00:22:29.223 1.00000% : 13913.797us 00:22:29.223 10.00000% : 15526.991us 00:22:29.223 25.00000% : 16232.763us 00:22:29.223 50.00000% : 17341.834us 00:22:29.223 75.00000% : 18955.028us 00:22:29.223 90.00000% : 20064.098us 00:22:29.223 95.00000% : 20769.871us 00:22:29.223 98.00000% : 22584.714us 00:22:29.223 99.00000% : 36901.809us 00:22:29.223 99.50000% : 43959.532us 00:22:29.223 99.90000% : 45774.375us 00:22:29.223 99.99000% : 45976.025us 00:22:29.223 99.99900% : 45976.025us 00:22:29.223 99.99990% : 45976.025us 00:22:29.223 99.99999% : 45976.025us 00:22:29.223 00:22:29.223 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:22:29.223 ================================================================================= 00:22:29.223 1.00000% : 13913.797us 00:22:29.223 10.00000% : 15526.991us 00:22:29.223 25.00000% : 16232.763us 00:22:29.223 50.00000% : 17442.658us 00:22:29.223 75.00000% : 19055.852us 00:22:29.223 90.00000% : 20064.098us 00:22:29.223 95.00000% : 20769.871us 00:22:29.223 98.00000% : 22584.714us 
00:22:29.223 99.00000% : 34683.668us 00:22:29.223 99.50000% : 43757.883us 00:22:29.223 99.90000% : 44564.480us 00:22:29.223 99.99000% : 44766.129us 00:22:29.223 99.99900% : 44766.129us 00:22:29.223 99.99990% : 44766.129us 00:22:29.223 99.99999% : 44766.129us 00:22:29.223 00:22:29.223 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:22:29.223 ================================================================================= 00:22:29.223 1.00000% : 13812.972us 00:22:29.223 10.00000% : 15426.166us 00:22:29.223 25.00000% : 16232.763us 00:22:29.223 50.00000% : 17442.658us 00:22:29.223 75.00000% : 18955.028us 00:22:29.223 90.00000% : 20164.923us 00:22:29.223 95.00000% : 20971.520us 00:22:29.223 98.00000% : 22584.714us 00:22:29.223 99.00000% : 33272.123us 00:22:29.223 99.50000% : 42346.338us 00:22:29.223 99.90000% : 43354.585us 00:22:29.223 99.99000% : 43556.234us 00:22:29.223 99.99900% : 43556.234us 00:22:29.223 99.99990% : 43556.234us 00:22:29.223 99.99999% : 43556.234us 00:22:29.223 00:22:29.223 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:22:29.223 ================================================================================= 00:22:29.223 1.00000% : 13510.498us 00:22:29.223 10.00000% : 15426.166us 00:22:29.223 25.00000% : 16232.763us 00:22:29.223 50.00000% : 17442.658us 00:22:29.223 75.00000% : 18955.028us 00:22:29.223 90.00000% : 20164.923us 00:22:29.223 95.00000% : 21173.169us 00:22:29.223 98.00000% : 22282.240us 00:22:29.223 99.00000% : 31255.631us 00:22:29.223 99.50000% : 40531.495us 00:22:29.223 99.90000% : 41338.092us 00:22:29.223 99.99000% : 41539.742us 00:22:29.223 99.99900% : 41539.742us 00:22:29.223 99.99990% : 41539.742us 00:22:29.223 99.99999% : 41539.742us 00:22:29.223 00:22:29.223 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:22:29.223 ================================================================================= 00:22:29.223 1.00000% : 13208.025us 00:22:29.223 10.00000% : 15426.166us 00:22:29.223 25.00000% : 16131.938us 00:22:29.223 50.00000% : 17341.834us 00:22:29.223 75.00000% : 18955.028us 00:22:29.223 90.00000% : 20064.098us 00:22:29.223 95.00000% : 21173.169us 00:22:29.223 98.00000% : 22282.240us 00:22:29.223 99.00000% : 31053.982us 00:22:29.223 99.50000% : 40128.197us 00:22:29.223 99.90000% : 40934.794us 00:22:29.223 99.99000% : 41136.443us 00:22:29.223 99.99900% : 41136.443us 00:22:29.223 99.99990% : 41136.443us 00:22:29.223 99.99999% : 41136.443us 00:22:29.223 00:22:29.223 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:22:29.223 ================================================================================= 00:22:29.223 1.00000% : 13409.674us 00:22:29.223 10.00000% : 15426.166us 00:22:29.224 25.00000% : 16131.938us 00:22:29.224 50.00000% : 17341.834us 00:22:29.224 75.00000% : 18854.203us 00:22:29.224 90.00000% : 20164.923us 00:22:29.224 95.00000% : 20870.695us 00:22:29.224 98.00000% : 22080.591us 00:22:29.224 99.00000% : 22786.363us 00:22:29.224 99.50000% : 29844.086us 00:22:29.224 99.90000% : 30650.683us 00:22:29.224 99.99000% : 30852.332us 00:22:29.224 99.99900% : 30852.332us 00:22:29.224 99.99990% : 30852.332us 00:22:29.224 99.99999% : 30852.332us 00:22:29.224 00:22:29.224 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:22:29.224 ============================================================================== 00:22:29.224 Range in us Cumulative IO count 00:22:29.224 13107.200 - 13208.025: 0.0558% ( 4) 00:22:29.224 13208.025 - 13308.849: 0.1256% ( 5) 00:22:29.224 
13308.849 - 13409.674: 0.2093% ( 6) 00:22:29.224 13409.674 - 13510.498: 0.3209% ( 8) 00:22:29.224 13510.498 - 13611.323: 0.5301% ( 15) 00:22:29.224 13611.323 - 13712.148: 0.7394% ( 15) 00:22:29.224 13712.148 - 13812.972: 0.9487% ( 15) 00:22:29.224 13812.972 - 13913.797: 1.2556% ( 22) 00:22:29.224 13913.797 - 14014.622: 1.8136% ( 40) 00:22:29.224 14014.622 - 14115.446: 2.2321% ( 30) 00:22:29.224 14115.446 - 14216.271: 2.6088% ( 27) 00:22:29.224 14216.271 - 14317.095: 3.4180% ( 58) 00:22:29.224 14317.095 - 14417.920: 3.9900% ( 41) 00:22:29.224 14417.920 - 14518.745: 4.4782% ( 35) 00:22:29.224 14518.745 - 14619.569: 4.8410% ( 26) 00:22:29.224 14619.569 - 14720.394: 5.0642% ( 16) 00:22:29.224 14720.394 - 14821.218: 5.1897% ( 9) 00:22:29.224 14821.218 - 14922.043: 5.3153% ( 9) 00:22:29.224 14922.043 - 15022.868: 5.5385% ( 16) 00:22:29.224 15022.868 - 15123.692: 5.9012% ( 26) 00:22:29.224 15123.692 - 15224.517: 6.6964% ( 57) 00:22:29.224 15224.517 - 15325.342: 7.6869% ( 71) 00:22:29.224 15325.342 - 15426.166: 8.6217% ( 67) 00:22:29.224 15426.166 - 15526.991: 10.2958% ( 120) 00:22:29.224 15526.991 - 15627.815: 11.7327% ( 103) 00:22:29.224 15627.815 - 15728.640: 14.1602% ( 174) 00:22:29.224 15728.640 - 15829.465: 16.6853% ( 181) 00:22:29.224 15829.465 - 15930.289: 19.1546% ( 177) 00:22:29.224 15930.289 - 16031.114: 21.6099% ( 176) 00:22:29.224 16031.114 - 16131.938: 23.6886% ( 149) 00:22:29.224 16131.938 - 16232.763: 25.6836% ( 143) 00:22:29.224 16232.763 - 16333.588: 28.0273% ( 168) 00:22:29.224 16333.588 - 16434.412: 30.4129% ( 171) 00:22:29.224 16434.412 - 16535.237: 32.7567% ( 168) 00:22:29.224 16535.237 - 16636.062: 35.7561% ( 215) 00:22:29.224 16636.062 - 16736.886: 38.1417% ( 171) 00:22:29.224 16736.886 - 16837.711: 40.7366% ( 186) 00:22:29.224 16837.711 - 16938.535: 42.6897% ( 140) 00:22:29.224 16938.535 - 17039.360: 44.5312% ( 132) 00:22:29.224 17039.360 - 17140.185: 47.0006% ( 177) 00:22:29.224 17140.185 - 17241.009: 48.8142% ( 130) 00:22:29.224 17241.009 - 17341.834: 50.3209% ( 108) 00:22:29.224 17341.834 - 17442.658: 51.7160% ( 100) 00:22:29.224 17442.658 - 17543.483: 52.8739% ( 83) 00:22:29.224 17543.483 - 17644.308: 54.0039% ( 81) 00:22:29.224 17644.308 - 17745.132: 55.2455% ( 89) 00:22:29.224 17745.132 - 17845.957: 57.1847% ( 139) 00:22:29.224 17845.957 - 17946.782: 58.9565% ( 127) 00:22:29.224 17946.782 - 18047.606: 60.8956% ( 139) 00:22:29.224 18047.606 - 18148.431: 62.7930% ( 136) 00:22:29.224 18148.431 - 18249.255: 64.2020% ( 101) 00:22:29.224 18249.255 - 18350.080: 65.4297% ( 88) 00:22:29.224 18350.080 - 18450.905: 66.6713% ( 89) 00:22:29.224 18450.905 - 18551.729: 67.8432% ( 84) 00:22:29.224 18551.729 - 18652.554: 69.4336% ( 114) 00:22:29.224 18652.554 - 18753.378: 71.5960% ( 155) 00:22:29.224 18753.378 - 18854.203: 73.7723% ( 156) 00:22:29.224 18854.203 - 18955.028: 75.4883% ( 123) 00:22:29.224 18955.028 - 19055.852: 77.4275% ( 139) 00:22:29.224 19055.852 - 19156.677: 79.1016% ( 120) 00:22:29.224 19156.677 - 19257.502: 80.2874% ( 85) 00:22:29.224 19257.502 - 19358.326: 81.3058% ( 73) 00:22:29.224 19358.326 - 19459.151: 82.6590% ( 97) 00:22:29.224 19459.151 - 19559.975: 83.8309% ( 84) 00:22:29.224 19559.975 - 19660.800: 85.2121% ( 99) 00:22:29.224 19660.800 - 19761.625: 86.4258% ( 87) 00:22:29.224 19761.625 - 19862.449: 87.7093% ( 92) 00:22:29.224 19862.449 - 19963.274: 89.2997% ( 114) 00:22:29.224 19963.274 - 20064.098: 90.4018% ( 79) 00:22:29.224 20064.098 - 20164.923: 91.3365% ( 67) 00:22:29.224 20164.923 - 20265.748: 92.2573% ( 66) 00:22:29.224 20265.748 - 20366.572: 
93.0664% ( 58) 00:22:29.224 20366.572 - 20467.397: 93.7919% ( 52) 00:22:29.224 20467.397 - 20568.222: 94.4615% ( 48) 00:22:29.224 20568.222 - 20669.046: 94.9358% ( 34) 00:22:29.224 20669.046 - 20769.871: 95.4381% ( 36) 00:22:29.224 20769.871 - 20870.695: 95.6613% ( 16) 00:22:29.224 20870.695 - 20971.520: 95.9124% ( 18) 00:22:29.224 20971.520 - 21072.345: 96.0240% ( 8) 00:22:29.224 21072.345 - 21173.169: 96.1356% ( 8) 00:22:29.224 21173.169 - 21273.994: 96.2751% ( 10) 00:22:29.224 21273.994 - 21374.818: 96.4146% ( 10) 00:22:29.224 21374.818 - 21475.643: 96.5262% ( 8) 00:22:29.224 21475.643 - 21576.468: 96.6099% ( 6) 00:22:29.224 21576.468 - 21677.292: 96.6797% ( 5) 00:22:29.224 21677.292 - 21778.117: 96.8610% ( 13) 00:22:29.224 21778.117 - 21878.942: 97.0703% ( 15) 00:22:29.224 21878.942 - 21979.766: 97.3912% ( 23) 00:22:29.224 21979.766 - 22080.591: 97.5586% ( 12) 00:22:29.224 22080.591 - 22181.415: 97.6702% ( 8) 00:22:29.224 22181.415 - 22282.240: 97.8097% ( 10) 00:22:29.224 22282.240 - 22383.065: 97.8934% ( 6) 00:22:29.224 22383.065 - 22483.889: 97.9771% ( 6) 00:22:29.224 22483.889 - 22584.714: 98.0608% ( 6) 00:22:29.224 22584.714 - 22685.538: 98.1306% ( 5) 00:22:29.224 22685.538 - 22786.363: 98.2003% ( 5) 00:22:29.224 22786.363 - 22887.188: 98.2143% ( 1) 00:22:29.224 35288.615 - 35490.265: 98.2422% ( 2) 00:22:29.224 35490.265 - 35691.914: 98.3817% ( 10) 00:22:29.224 35691.914 - 35893.563: 98.5212% ( 10) 00:22:29.224 35893.563 - 36095.212: 98.6189% ( 7) 00:22:29.224 36095.212 - 36296.862: 98.7305% ( 8) 00:22:29.224 36296.862 - 36498.511: 98.8281% ( 7) 00:22:29.224 36498.511 - 36700.160: 98.9258% ( 7) 00:22:29.224 36700.160 - 36901.809: 99.0095% ( 6) 00:22:29.224 36901.809 - 37103.458: 99.1071% ( 7) 00:22:29.224 43354.585 - 43556.234: 99.1629% ( 4) 00:22:29.224 43556.234 - 43757.883: 99.4420% ( 20) 00:22:29.224 43757.883 - 43959.532: 99.5257% ( 6) 00:22:29.224 44766.129 - 44967.778: 99.5536% ( 2) 00:22:29.224 44967.778 - 45169.428: 99.6512% ( 7) 00:22:29.224 45169.428 - 45371.077: 99.7489% ( 7) 00:22:29.224 45371.077 - 45572.726: 99.8465% ( 7) 00:22:29.224 45572.726 - 45774.375: 99.9442% ( 7) 00:22:29.224 45774.375 - 45976.025: 100.0000% ( 4) 00:22:29.224 00:22:29.224 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:22:29.224 ============================================================================== 00:22:29.224 Range in us Cumulative IO count 00:22:29.224 13107.200 - 13208.025: 0.0140% ( 1) 00:22:29.224 13208.025 - 13308.849: 0.0837% ( 5) 00:22:29.224 13308.849 - 13409.674: 0.1674% ( 6) 00:22:29.224 13409.674 - 13510.498: 0.2372% ( 5) 00:22:29.224 13510.498 - 13611.323: 0.3348% ( 7) 00:22:29.224 13611.323 - 13712.148: 0.6696% ( 24) 00:22:29.224 13712.148 - 13812.972: 0.8650% ( 14) 00:22:29.224 13812.972 - 13913.797: 1.0882% ( 16) 00:22:29.224 13913.797 - 14014.622: 1.2974% ( 15) 00:22:29.224 14014.622 - 14115.446: 1.7997% ( 36) 00:22:29.224 14115.446 - 14216.271: 2.2740% ( 34) 00:22:29.224 14216.271 - 14317.095: 2.6088% ( 24) 00:22:29.224 14317.095 - 14417.920: 2.9297% ( 23) 00:22:29.224 14417.920 - 14518.745: 3.4738% ( 39) 00:22:29.224 14518.745 - 14619.569: 3.9342% ( 33) 00:22:29.224 14619.569 - 14720.394: 4.3248% ( 28) 00:22:29.224 14720.394 - 14821.218: 4.8410% ( 37) 00:22:29.224 14821.218 - 14922.043: 5.2316% ( 28) 00:22:29.224 14922.043 - 15022.868: 5.7059% ( 34) 00:22:29.224 15022.868 - 15123.692: 5.9570% ( 18) 00:22:29.224 15123.692 - 15224.517: 6.5151% ( 40) 00:22:29.224 15224.517 - 15325.342: 7.3661% ( 61) 00:22:29.224 15325.342 - 15426.166: 8.8030% ( 103) 
00:22:29.224 15426.166 - 15526.991: 10.7840% ( 142) 00:22:29.224 15526.991 - 15627.815: 12.8767% ( 150) 00:22:29.224 15627.815 - 15728.640: 14.7740% ( 136) 00:22:29.224 15728.640 - 15829.465: 17.2154% ( 175) 00:22:29.224 15829.465 - 15930.289: 19.6987% ( 178) 00:22:29.224 15930.289 - 16031.114: 21.9308% ( 160) 00:22:29.224 16031.114 - 16131.938: 24.2048% ( 163) 00:22:29.224 16131.938 - 16232.763: 26.2277% ( 145) 00:22:29.224 16232.763 - 16333.588: 28.7946% ( 184) 00:22:29.224 16333.588 - 16434.412: 31.8778% ( 221) 00:22:29.224 16434.412 - 16535.237: 34.7935% ( 209) 00:22:29.224 16535.237 - 16636.062: 37.3186% ( 181) 00:22:29.224 16636.062 - 16736.886: 39.0765% ( 126) 00:22:29.224 16736.886 - 16837.711: 40.8203% ( 125) 00:22:29.224 16837.711 - 16938.535: 42.6339% ( 130) 00:22:29.224 16938.535 - 17039.360: 44.1127% ( 106) 00:22:29.224 17039.360 - 17140.185: 45.6334% ( 109) 00:22:29.224 17140.185 - 17241.009: 47.2796% ( 118) 00:22:29.224 17241.009 - 17341.834: 48.9676% ( 121) 00:22:29.224 17341.834 - 17442.658: 50.9626% ( 143) 00:22:29.224 17442.658 - 17543.483: 52.6367% ( 120) 00:22:29.224 17543.483 - 17644.308: 54.6596% ( 145) 00:22:29.224 17644.308 - 17745.132: 56.5011% ( 132) 00:22:29.224 17745.132 - 17845.957: 58.1194% ( 116) 00:22:29.224 17845.957 - 17946.782: 59.5843% ( 105) 00:22:29.224 17946.782 - 18047.606: 61.1607% ( 113) 00:22:29.224 18047.606 - 18148.431: 62.7232% ( 112) 00:22:29.224 18148.431 - 18249.255: 63.7974% ( 77) 00:22:29.224 18249.255 - 18350.080: 64.8577% ( 76) 00:22:29.224 18350.080 - 18450.905: 65.9040% ( 75) 00:22:29.224 18450.905 - 18551.729: 67.1596% ( 90) 00:22:29.224 18551.729 - 18652.554: 68.6942% ( 110) 00:22:29.224 18652.554 - 18753.378: 70.8984% ( 158) 00:22:29.224 18753.378 - 18854.203: 73.0469% ( 154) 00:22:29.224 18854.203 - 18955.028: 74.9442% ( 136) 00:22:29.224 18955.028 - 19055.852: 77.0508% ( 151) 00:22:29.225 19055.852 - 19156.677: 78.7528% ( 122) 00:22:29.225 19156.677 - 19257.502: 80.5385% ( 128) 00:22:29.225 19257.502 - 19358.326: 81.9475% ( 101) 00:22:29.225 19358.326 - 19459.151: 83.8030% ( 133) 00:22:29.225 19459.151 - 19559.975: 85.1702% ( 98) 00:22:29.225 19559.975 - 19660.800: 86.3839% ( 87) 00:22:29.225 19660.800 - 19761.625: 87.5977% ( 87) 00:22:29.225 19761.625 - 19862.449: 88.7835% ( 85) 00:22:29.225 19862.449 - 19963.274: 89.7321% ( 68) 00:22:29.225 19963.274 - 20064.098: 90.6948% ( 69) 00:22:29.225 20064.098 - 20164.923: 91.5039% ( 58) 00:22:29.225 20164.923 - 20265.748: 92.2154% ( 51) 00:22:29.225 20265.748 - 20366.572: 93.0246% ( 58) 00:22:29.225 20366.572 - 20467.397: 93.8337% ( 58) 00:22:29.225 20467.397 - 20568.222: 94.3359% ( 36) 00:22:29.225 20568.222 - 20669.046: 94.6847% ( 25) 00:22:29.225 20669.046 - 20769.871: 95.0195% ( 24) 00:22:29.225 20769.871 - 20870.695: 95.3125% ( 21) 00:22:29.225 20870.695 - 20971.520: 95.5497% ( 17) 00:22:29.225 20971.520 - 21072.345: 95.9124% ( 26) 00:22:29.225 21072.345 - 21173.169: 96.2612% ( 25) 00:22:29.225 21173.169 - 21273.994: 96.4146% ( 11) 00:22:29.225 21273.994 - 21374.818: 96.5681% ( 11) 00:22:29.225 21374.818 - 21475.643: 96.6657% ( 7) 00:22:29.225 21475.643 - 21576.468: 96.7773% ( 8) 00:22:29.225 21576.468 - 21677.292: 96.9169% ( 10) 00:22:29.225 21677.292 - 21778.117: 97.0285% ( 8) 00:22:29.225 21778.117 - 21878.942: 97.0982% ( 5) 00:22:29.225 21878.942 - 21979.766: 97.2517% ( 11) 00:22:29.225 21979.766 - 22080.591: 97.3912% ( 10) 00:22:29.225 22080.591 - 22181.415: 97.5865% ( 14) 00:22:29.225 22181.415 - 22282.240: 97.7539% ( 12) 00:22:29.225 22282.240 - 22383.065: 97.8516% ( 7) 
00:22:29.225 22383.065 - 22483.889: 97.9213% ( 5) 00:22:29.225 22483.889 - 22584.714: 98.0050% ( 6) 00:22:29.225 22584.714 - 22685.538: 98.0748% ( 5) 00:22:29.225 22685.538 - 22786.363: 98.1445% ( 5) 00:22:29.225 22786.363 - 22887.188: 98.2143% ( 5) 00:22:29.225 32868.825 - 33070.474: 98.2561% ( 3) 00:22:29.225 33070.474 - 33272.123: 98.3538% ( 7) 00:22:29.225 33272.123 - 33473.772: 98.4515% ( 7) 00:22:29.225 33473.772 - 33675.422: 98.5491% ( 7) 00:22:29.225 33675.422 - 33877.071: 98.6468% ( 7) 00:22:29.225 33877.071 - 34078.720: 98.7444% ( 7) 00:22:29.225 34078.720 - 34280.369: 98.8421% ( 7) 00:22:29.225 34280.369 - 34482.018: 98.9397% ( 7) 00:22:29.225 34482.018 - 34683.668: 99.0374% ( 7) 00:22:29.225 34683.668 - 34885.317: 99.1071% ( 5) 00:22:29.225 42749.637 - 42951.286: 99.1769% ( 5) 00:22:29.225 42951.286 - 43152.935: 99.2467% ( 5) 00:22:29.225 43152.935 - 43354.585: 99.3304% ( 6) 00:22:29.225 43354.585 - 43556.234: 99.4420% ( 8) 00:22:29.225 43556.234 - 43757.883: 99.5396% ( 7) 00:22:29.225 43757.883 - 43959.532: 99.6373% ( 7) 00:22:29.225 43959.532 - 44161.182: 99.7210% ( 6) 00:22:29.225 44161.182 - 44362.831: 99.8326% ( 8) 00:22:29.225 44362.831 - 44564.480: 99.9163% ( 6) 00:22:29.225 44564.480 - 44766.129: 100.0000% ( 6) 00:22:29.225 00:22:29.225 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:22:29.225 ============================================================================== 00:22:29.225 Range in us Cumulative IO count 00:22:29.225 13208.025 - 13308.849: 0.0698% ( 5) 00:22:29.225 13308.849 - 13409.674: 0.1256% ( 4) 00:22:29.225 13409.674 - 13510.498: 0.3906% ( 19) 00:22:29.225 13510.498 - 13611.323: 0.6975% ( 22) 00:22:29.225 13611.323 - 13712.148: 0.9068% ( 15) 00:22:29.225 13712.148 - 13812.972: 1.1858% ( 20) 00:22:29.225 13812.972 - 13913.797: 1.2835% ( 7) 00:22:29.225 13913.797 - 14014.622: 1.4230% ( 10) 00:22:29.225 14014.622 - 14115.446: 1.7718% ( 25) 00:22:29.225 14115.446 - 14216.271: 2.3438% ( 41) 00:22:29.225 14216.271 - 14317.095: 2.8460% ( 36) 00:22:29.225 14317.095 - 14417.920: 3.0692% ( 16) 00:22:29.225 14417.920 - 14518.745: 3.2227% ( 11) 00:22:29.225 14518.745 - 14619.569: 3.5156% ( 21) 00:22:29.225 14619.569 - 14720.394: 4.0318% ( 37) 00:22:29.225 14720.394 - 14821.218: 4.8828% ( 61) 00:22:29.225 14821.218 - 14922.043: 5.5525% ( 48) 00:22:29.225 14922.043 - 15022.868: 6.0268% ( 34) 00:22:29.225 15022.868 - 15123.692: 6.7522% ( 52) 00:22:29.225 15123.692 - 15224.517: 7.8823% ( 81) 00:22:29.225 15224.517 - 15325.342: 9.5703% ( 121) 00:22:29.225 15325.342 - 15426.166: 11.4118% ( 132) 00:22:29.225 15426.166 - 15526.991: 13.0720% ( 119) 00:22:29.225 15526.991 - 15627.815: 14.8717% ( 129) 00:22:29.225 15627.815 - 15728.640: 17.0480% ( 156) 00:22:29.225 15728.640 - 15829.465: 18.8058% ( 126) 00:22:29.225 15829.465 - 15930.289: 20.7868% ( 142) 00:22:29.225 15930.289 - 16031.114: 22.1680% ( 99) 00:22:29.225 16031.114 - 16131.938: 24.0513% ( 135) 00:22:29.225 16131.938 - 16232.763: 26.6044% ( 183) 00:22:29.225 16232.763 - 16333.588: 28.9202% ( 166) 00:22:29.225 16333.588 - 16434.412: 31.0128% ( 150) 00:22:29.225 16434.412 - 16535.237: 33.0636% ( 147) 00:22:29.225 16535.237 - 16636.062: 34.8772% ( 130) 00:22:29.225 16636.062 - 16736.886: 36.6211% ( 125) 00:22:29.225 16736.886 - 16837.711: 38.9788% ( 169) 00:22:29.225 16837.711 - 16938.535: 41.3504% ( 170) 00:22:29.225 16938.535 - 17039.360: 43.0525% ( 122) 00:22:29.225 17039.360 - 17140.185: 45.1311% ( 149) 00:22:29.225 17140.185 - 17241.009: 46.7913% ( 119) 00:22:29.225 17241.009 - 17341.834: 
48.9258% ( 153) 00:22:29.225 17341.834 - 17442.658: 51.3672% ( 175) 00:22:29.225 17442.658 - 17543.483: 53.7667% ( 172) 00:22:29.225 17543.483 - 17644.308: 55.3711% ( 115) 00:22:29.225 17644.308 - 17745.132: 57.2824% ( 137) 00:22:29.225 17745.132 - 17845.957: 58.9007% ( 116) 00:22:29.225 17845.957 - 17946.782: 60.2818% ( 99) 00:22:29.225 17946.782 - 18047.606: 62.0117% ( 124) 00:22:29.225 18047.606 - 18148.431: 63.9090% ( 136) 00:22:29.225 18148.431 - 18249.255: 65.5134% ( 115) 00:22:29.225 18249.255 - 18350.080: 66.8806% ( 98) 00:22:29.225 18350.080 - 18450.905: 68.2338% ( 97) 00:22:29.225 18450.905 - 18551.729: 69.9219% ( 121) 00:22:29.225 18551.729 - 18652.554: 71.3728% ( 104) 00:22:29.225 18652.554 - 18753.378: 73.1166% ( 125) 00:22:29.225 18753.378 - 18854.203: 74.8047% ( 121) 00:22:29.225 18854.203 - 18955.028: 76.1858% ( 99) 00:22:29.225 18955.028 - 19055.852: 77.4275% ( 89) 00:22:29.225 19055.852 - 19156.677: 78.6133% ( 85) 00:22:29.225 19156.677 - 19257.502: 79.8270% ( 87) 00:22:29.225 19257.502 - 19358.326: 81.2360% ( 101) 00:22:29.225 19358.326 - 19459.151: 82.4916% ( 90) 00:22:29.225 19459.151 - 19559.975: 83.8867% ( 100) 00:22:29.225 19559.975 - 19660.800: 85.0586% ( 84) 00:22:29.225 19660.800 - 19761.625: 86.3142% ( 90) 00:22:29.225 19761.625 - 19862.449: 87.2489% ( 67) 00:22:29.225 19862.449 - 19963.274: 88.2394% ( 71) 00:22:29.225 19963.274 - 20064.098: 89.3415% ( 79) 00:22:29.225 20064.098 - 20164.923: 90.3460% ( 72) 00:22:29.225 20164.923 - 20265.748: 91.1551% ( 58) 00:22:29.225 20265.748 - 20366.572: 91.8527% ( 50) 00:22:29.225 20366.572 - 20467.397: 92.4944% ( 46) 00:22:29.225 20467.397 - 20568.222: 93.1641% ( 48) 00:22:29.225 20568.222 - 20669.046: 93.9314% ( 55) 00:22:29.225 20669.046 - 20769.871: 94.3917% ( 33) 00:22:29.225 20769.871 - 20870.695: 94.7126% ( 23) 00:22:29.225 20870.695 - 20971.520: 95.1032% ( 28) 00:22:29.225 20971.520 - 21072.345: 95.3125% ( 15) 00:22:29.225 21072.345 - 21173.169: 95.7031% ( 28) 00:22:29.225 21173.169 - 21273.994: 96.0798% ( 27) 00:22:29.225 21273.994 - 21374.818: 96.3170% ( 17) 00:22:29.225 21374.818 - 21475.643: 96.5402% ( 16) 00:22:29.225 21475.643 - 21576.468: 96.7215% ( 13) 00:22:29.225 21576.468 - 21677.292: 97.0424% ( 23) 00:22:29.225 21677.292 - 21778.117: 97.2377% ( 14) 00:22:29.225 21778.117 - 21878.942: 97.3912% ( 11) 00:22:29.225 21878.942 - 21979.766: 97.5167% ( 9) 00:22:29.225 21979.766 - 22080.591: 97.6423% ( 9) 00:22:29.225 22080.591 - 22181.415: 97.7539% ( 8) 00:22:29.225 22181.415 - 22282.240: 97.8516% ( 7) 00:22:29.225 22282.240 - 22383.065: 97.8934% ( 3) 00:22:29.225 22383.065 - 22483.889: 97.9771% ( 6) 00:22:29.225 22483.889 - 22584.714: 98.0748% ( 7) 00:22:29.225 22584.714 - 22685.538: 98.1166% ( 3) 00:22:29.225 22685.538 - 22786.363: 98.1585% ( 3) 00:22:29.225 22786.363 - 22887.188: 98.2003% ( 3) 00:22:29.225 22887.188 - 22988.012: 98.2143% ( 1) 00:22:29.225 31255.631 - 31457.280: 98.2561% ( 3) 00:22:29.225 31457.280 - 31658.929: 98.3119% ( 4) 00:22:29.225 31658.929 - 31860.578: 98.4235% ( 8) 00:22:29.225 31860.578 - 32062.228: 98.5073% ( 6) 00:22:29.225 32062.228 - 32263.877: 98.5910% ( 6) 00:22:29.225 32263.877 - 32465.526: 98.6607% ( 5) 00:22:29.225 32465.526 - 32667.175: 98.7723% ( 8) 00:22:29.225 32667.175 - 32868.825: 98.8560% ( 6) 00:22:29.225 32868.825 - 33070.474: 98.9258% ( 5) 00:22:29.225 33070.474 - 33272.123: 99.0374% ( 8) 00:22:29.225 33272.123 - 33473.772: 99.1071% ( 5) 00:22:29.225 41136.443 - 41338.092: 99.1211% ( 1) 00:22:29.225 41338.092 - 41539.742: 99.1908% ( 5) 00:22:29.225 41539.742 - 
41741.391: 99.2467% ( 4) 00:22:29.225 41741.391 - 41943.040: 99.3443% ( 7) 00:22:29.225 41943.040 - 42144.689: 99.4280% ( 6) 00:22:29.225 42144.689 - 42346.338: 99.5257% ( 7) 00:22:29.225 42346.338 - 42547.988: 99.6094% ( 6) 00:22:29.225 42547.988 - 42749.637: 99.6791% ( 5) 00:22:29.225 42749.637 - 42951.286: 99.7628% ( 6) 00:22:29.225 42951.286 - 43152.935: 99.8605% ( 7) 00:22:29.225 43152.935 - 43354.585: 99.9302% ( 5) 00:22:29.225 43354.585 - 43556.234: 100.0000% ( 5) 00:22:29.225 00:22:29.225 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:22:29.225 ============================================================================== 00:22:29.225 Range in us Cumulative IO count 00:22:29.225 12855.138 - 12905.551: 0.0140% ( 1) 00:22:29.225 12905.551 - 13006.375: 0.0837% ( 5) 00:22:29.225 13006.375 - 13107.200: 0.1535% ( 5) 00:22:29.225 13107.200 - 13208.025: 0.2790% ( 9) 00:22:29.225 13208.025 - 13308.849: 0.4464% ( 12) 00:22:29.225 13308.849 - 13409.674: 0.8510% ( 29) 00:22:29.225 13409.674 - 13510.498: 1.1719% ( 23) 00:22:29.226 13510.498 - 13611.323: 1.5485% ( 27) 00:22:29.226 13611.323 - 13712.148: 1.8415% ( 21) 00:22:29.226 13712.148 - 13812.972: 2.1066% ( 19) 00:22:29.226 13812.972 - 13913.797: 2.2600% ( 11) 00:22:29.226 13913.797 - 14014.622: 2.3856% ( 9) 00:22:29.226 14014.622 - 14115.446: 2.4554% ( 5) 00:22:29.226 14115.446 - 14216.271: 2.5530% ( 7) 00:22:29.226 14216.271 - 14317.095: 2.7483% ( 14) 00:22:29.226 14317.095 - 14417.920: 3.1250% ( 27) 00:22:29.226 14417.920 - 14518.745: 3.5575% ( 31) 00:22:29.226 14518.745 - 14619.569: 3.9900% ( 31) 00:22:29.226 14619.569 - 14720.394: 4.4782% ( 35) 00:22:29.226 14720.394 - 14821.218: 4.8270% ( 25) 00:22:29.226 14821.218 - 14922.043: 5.2595% ( 31) 00:22:29.226 14922.043 - 15022.868: 6.0268% ( 55) 00:22:29.226 15022.868 - 15123.692: 6.7662% ( 53) 00:22:29.226 15123.692 - 15224.517: 7.8404% ( 77) 00:22:29.226 15224.517 - 15325.342: 8.8449% ( 72) 00:22:29.226 15325.342 - 15426.166: 10.0167% ( 84) 00:22:29.226 15426.166 - 15526.991: 11.2165% ( 86) 00:22:29.226 15526.991 - 15627.815: 12.9883% ( 127) 00:22:29.226 15627.815 - 15728.640: 14.8298% ( 132) 00:22:29.226 15728.640 - 15829.465: 17.6060% ( 199) 00:22:29.226 15829.465 - 15930.289: 19.9637% ( 169) 00:22:29.226 15930.289 - 16031.114: 22.0843% ( 152) 00:22:29.226 16031.114 - 16131.938: 24.2606% ( 156) 00:22:29.226 16131.938 - 16232.763: 26.0324% ( 127) 00:22:29.226 16232.763 - 16333.588: 28.2506% ( 159) 00:22:29.226 16333.588 - 16434.412: 30.1897% ( 139) 00:22:29.226 16434.412 - 16535.237: 32.2684% ( 149) 00:22:29.226 16535.237 - 16636.062: 35.1842% ( 209) 00:22:29.226 16636.062 - 16736.886: 38.0301% ( 204) 00:22:29.226 16736.886 - 16837.711: 40.0391% ( 144) 00:22:29.226 16837.711 - 16938.535: 42.1735% ( 153) 00:22:29.226 16938.535 - 17039.360: 43.7779% ( 115) 00:22:29.226 17039.360 - 17140.185: 45.9682% ( 157) 00:22:29.226 17140.185 - 17241.009: 47.9492% ( 142) 00:22:29.226 17241.009 - 17341.834: 49.6233% ( 120) 00:22:29.226 17341.834 - 17442.658: 51.6741% ( 147) 00:22:29.226 17442.658 - 17543.483: 53.3482% ( 120) 00:22:29.226 17543.483 - 17644.308: 54.7991% ( 104) 00:22:29.226 17644.308 - 17745.132: 56.3616% ( 112) 00:22:29.226 17745.132 - 17845.957: 58.3984% ( 146) 00:22:29.226 17845.957 - 17946.782: 60.9515% ( 183) 00:22:29.226 17946.782 - 18047.606: 63.0301% ( 149) 00:22:29.226 18047.606 - 18148.431: 64.8577% ( 131) 00:22:29.226 18148.431 - 18249.255: 66.4202% ( 112) 00:22:29.226 18249.255 - 18350.080: 68.1501% ( 124) 00:22:29.226 18350.080 - 18450.905: 69.7405% ( 114) 
00:22:29.226 18450.905 - 18551.729: 70.8287% ( 78) 00:22:29.226 18551.729 - 18652.554: 71.7355% ( 65) 00:22:29.226 18652.554 - 18753.378: 72.7958% ( 76) 00:22:29.226 18753.378 - 18854.203: 73.8979% ( 79) 00:22:29.226 18854.203 - 18955.028: 75.0140% ( 80) 00:22:29.226 18955.028 - 19055.852: 76.1579% ( 82) 00:22:29.226 19055.852 - 19156.677: 77.5670% ( 101) 00:22:29.226 19156.677 - 19257.502: 79.0318% ( 105) 00:22:29.226 19257.502 - 19358.326: 80.3153% ( 92) 00:22:29.226 19358.326 - 19459.151: 81.6964% ( 99) 00:22:29.226 19459.151 - 19559.975: 83.2171% ( 109) 00:22:29.226 19559.975 - 19660.800: 84.7098% ( 107) 00:22:29.226 19660.800 - 19761.625: 85.9933% ( 92) 00:22:29.226 19761.625 - 19862.449: 87.3326% ( 96) 00:22:29.226 19862.449 - 19963.274: 88.3510% ( 73) 00:22:29.226 19963.274 - 20064.098: 89.4392% ( 78) 00:22:29.226 20064.098 - 20164.923: 90.3739% ( 67) 00:22:29.226 20164.923 - 20265.748: 91.1691% ( 57) 00:22:29.226 20265.748 - 20366.572: 91.8248% ( 47) 00:22:29.226 20366.572 - 20467.397: 92.3968% ( 41) 00:22:29.226 20467.397 - 20568.222: 92.9129% ( 37) 00:22:29.226 20568.222 - 20669.046: 93.2896% ( 27) 00:22:29.226 20669.046 - 20769.871: 93.5965% ( 22) 00:22:29.226 20769.871 - 20870.695: 93.8895% ( 21) 00:22:29.226 20870.695 - 20971.520: 94.2941% ( 29) 00:22:29.226 20971.520 - 21072.345: 94.8382% ( 39) 00:22:29.226 21072.345 - 21173.169: 95.4241% ( 42) 00:22:29.226 21173.169 - 21273.994: 95.9124% ( 35) 00:22:29.226 21273.994 - 21374.818: 96.3309% ( 30) 00:22:29.226 21374.818 - 21475.643: 96.6518% ( 23) 00:22:29.226 21475.643 - 21576.468: 96.9448% ( 21) 00:22:29.226 21576.468 - 21677.292: 97.1122% ( 12) 00:22:29.226 21677.292 - 21778.117: 97.2935% ( 13) 00:22:29.226 21778.117 - 21878.942: 97.5028% ( 15) 00:22:29.226 21878.942 - 21979.766: 97.6702% ( 12) 00:22:29.226 21979.766 - 22080.591: 97.7958% ( 9) 00:22:29.226 22080.591 - 22181.415: 97.9213% ( 9) 00:22:29.226 22181.415 - 22282.240: 98.0329% ( 8) 00:22:29.226 22282.240 - 22383.065: 98.1166% ( 6) 00:22:29.226 22383.065 - 22483.889: 98.1585% ( 3) 00:22:29.226 22483.889 - 22584.714: 98.2003% ( 3) 00:22:29.226 22584.714 - 22685.538: 98.2143% ( 1) 00:22:29.226 29239.138 - 29440.788: 98.2282% ( 1) 00:22:29.226 29440.788 - 29642.437: 98.3259% ( 7) 00:22:29.226 29642.437 - 29844.086: 98.4096% ( 6) 00:22:29.226 29844.086 - 30045.735: 98.5073% ( 7) 00:22:29.226 30045.735 - 30247.385: 98.6049% ( 7) 00:22:29.226 30247.385 - 30449.034: 98.7165% ( 8) 00:22:29.226 30449.034 - 30650.683: 98.8142% ( 7) 00:22:29.226 30650.683 - 30852.332: 98.8979% ( 6) 00:22:29.226 30852.332 - 31053.982: 98.9816% ( 6) 00:22:29.226 31053.982 - 31255.631: 99.0792% ( 7) 00:22:29.226 31255.631 - 31457.280: 99.1071% ( 2) 00:22:29.226 39523.249 - 39724.898: 99.1908% ( 6) 00:22:29.226 39724.898 - 39926.548: 99.2885% ( 7) 00:22:29.226 39926.548 - 40128.197: 99.3862% ( 7) 00:22:29.226 40128.197 - 40329.846: 99.4838% ( 7) 00:22:29.226 40329.846 - 40531.495: 99.5815% ( 7) 00:22:29.226 40531.495 - 40733.145: 99.6652% ( 6) 00:22:29.226 40733.145 - 40934.794: 99.7628% ( 7) 00:22:29.226 40934.794 - 41136.443: 99.8605% ( 7) 00:22:29.226 41136.443 - 41338.092: 99.9581% ( 7) 00:22:29.226 41338.092 - 41539.742: 100.0000% ( 3) 00:22:29.226 00:22:29.226 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:22:29.226 ============================================================================== 00:22:29.226 Range in us Cumulative IO count 00:22:29.226 11342.769 - 11393.182: 0.0140% ( 1) 00:22:29.226 11443.594 - 11494.006: 0.0279% ( 1) 00:22:29.226 11494.006 - 11544.418: 
0.0698% ( 3) 00:22:29.226 11544.418 - 11594.831: 0.1116% ( 3) 00:22:29.226 11594.831 - 11645.243: 0.1395% ( 2) 00:22:29.226 11645.243 - 11695.655: 0.1674% ( 2) 00:22:29.226 11695.655 - 11746.068: 0.2093% ( 3) 00:22:29.226 11746.068 - 11796.480: 0.2232% ( 1) 00:22:29.226 11796.480 - 11846.892: 0.2790% ( 4) 00:22:29.226 11846.892 - 11897.305: 0.3348% ( 4) 00:22:29.226 11897.305 - 11947.717: 0.4464% ( 8) 00:22:29.226 11947.717 - 11998.129: 0.6417% ( 14) 00:22:29.226 11998.129 - 12048.542: 0.6975% ( 4) 00:22:29.226 12048.542 - 12098.954: 0.7673% ( 5) 00:22:29.226 12098.954 - 12149.366: 0.8371% ( 5) 00:22:29.226 12149.366 - 12199.778: 0.8789% ( 3) 00:22:29.226 12199.778 - 12250.191: 0.8929% ( 1) 00:22:29.226 13006.375 - 13107.200: 0.9347% ( 3) 00:22:29.226 13107.200 - 13208.025: 1.0045% ( 5) 00:22:29.226 13208.025 - 13308.849: 1.0882% ( 6) 00:22:29.226 13308.849 - 13409.674: 1.1998% ( 8) 00:22:29.226 13409.674 - 13510.498: 1.4230% ( 16) 00:22:29.226 13510.498 - 13611.323: 1.5067% ( 6) 00:22:29.226 13611.323 - 13712.148: 1.5625% ( 4) 00:22:29.226 13712.148 - 13812.972: 1.6462% ( 6) 00:22:29.226 13812.972 - 13913.797: 1.7857% ( 10) 00:22:29.226 13913.797 - 14014.622: 1.9392% ( 11) 00:22:29.226 14014.622 - 14115.446: 2.1484% ( 15) 00:22:29.226 14115.446 - 14216.271: 2.3717% ( 16) 00:22:29.226 14216.271 - 14317.095: 2.6786% ( 22) 00:22:29.226 14317.095 - 14417.920: 3.2227% ( 39) 00:22:29.226 14417.920 - 14518.745: 3.5993% ( 27) 00:22:29.226 14518.745 - 14619.569: 4.1574% ( 40) 00:22:29.226 14619.569 - 14720.394: 4.7712% ( 44) 00:22:29.226 14720.394 - 14821.218: 5.4548% ( 49) 00:22:29.226 14821.218 - 14922.043: 5.9710% ( 37) 00:22:29.226 14922.043 - 15022.868: 6.5848% ( 44) 00:22:29.226 15022.868 - 15123.692: 7.7846% ( 86) 00:22:29.226 15123.692 - 15224.517: 8.9286% ( 82) 00:22:29.226 15224.517 - 15325.342: 9.9191% ( 71) 00:22:29.226 15325.342 - 15426.166: 10.9794% ( 76) 00:22:29.226 15426.166 - 15526.991: 12.3326% ( 97) 00:22:29.226 15526.991 - 15627.815: 13.8672% ( 110) 00:22:29.226 15627.815 - 15728.640: 16.0296% ( 155) 00:22:29.226 15728.640 - 15829.465: 18.1641% ( 153) 00:22:29.226 15829.465 - 15930.289: 21.1914% ( 217) 00:22:29.226 15930.289 - 16031.114: 23.3259% ( 153) 00:22:29.226 16031.114 - 16131.938: 25.7254% ( 172) 00:22:29.226 16131.938 - 16232.763: 27.8320% ( 151) 00:22:29.226 16232.763 - 16333.588: 30.0921% ( 162) 00:22:29.226 16333.588 - 16434.412: 33.0357% ( 211) 00:22:29.226 16434.412 - 16535.237: 35.6445% ( 187) 00:22:29.226 16535.237 - 16636.062: 37.9883% ( 168) 00:22:29.226 16636.062 - 16736.886: 40.7366% ( 197) 00:22:29.226 16736.886 - 16837.711: 42.8432% ( 151) 00:22:29.226 16837.711 - 16938.535: 44.8661% ( 145) 00:22:29.226 16938.535 - 17039.360: 46.3588% ( 107) 00:22:29.226 17039.360 - 17140.185: 47.9074% ( 111) 00:22:29.226 17140.185 - 17241.009: 49.2606% ( 97) 00:22:29.226 17241.009 - 17341.834: 50.8789% ( 116) 00:22:29.226 17341.834 - 17442.658: 51.8555% ( 70) 00:22:29.226 17442.658 - 17543.483: 52.9018% ( 75) 00:22:29.226 17543.483 - 17644.308: 53.9621% ( 76) 00:22:29.226 17644.308 - 17745.132: 54.9944% ( 74) 00:22:29.226 17745.132 - 17845.957: 55.9849% ( 71) 00:22:29.226 17845.957 - 17946.782: 56.9336% ( 68) 00:22:29.226 17946.782 - 18047.606: 58.3566% ( 102) 00:22:29.226 18047.606 - 18148.431: 60.1283% ( 127) 00:22:29.226 18148.431 - 18249.255: 62.5558% ( 174) 00:22:29.226 18249.255 - 18350.080: 64.7182% ( 155) 00:22:29.226 18350.080 - 18450.905: 66.4481% ( 124) 00:22:29.226 18450.905 - 18551.729: 68.3873% ( 139) 00:22:29.226 18551.729 - 18652.554: 70.3823% ( 143) 
00:22:29.226 18652.554 - 18753.378: 71.9727% ( 114) 00:22:29.226 18753.378 - 18854.203: 73.5212% ( 111) 00:22:29.226 18854.203 - 18955.028: 75.3069% ( 128) 00:22:29.226 18955.028 - 19055.852: 76.8415% ( 110) 00:22:29.227 19055.852 - 19156.677: 78.2506% ( 101) 00:22:29.227 19156.677 - 19257.502: 79.6735% ( 102) 00:22:29.227 19257.502 - 19358.326: 80.7338% ( 76) 00:22:29.227 19358.326 - 19459.151: 82.0592% ( 95) 00:22:29.227 19459.151 - 19559.975: 83.7333% ( 120) 00:22:29.227 19559.975 - 19660.800: 85.3655% ( 117) 00:22:29.227 19660.800 - 19761.625: 86.8164% ( 104) 00:22:29.227 19761.625 - 19862.449: 88.2394% ( 102) 00:22:29.227 19862.449 - 19963.274: 89.3555% ( 80) 00:22:29.227 19963.274 - 20064.098: 90.2204% ( 62) 00:22:29.227 20064.098 - 20164.923: 90.9877% ( 55) 00:22:29.227 20164.923 - 20265.748: 91.6295% ( 46) 00:22:29.227 20265.748 - 20366.572: 92.1317% ( 36) 00:22:29.227 20366.572 - 20467.397: 92.6618% ( 38) 00:22:29.227 20467.397 - 20568.222: 93.1501% ( 35) 00:22:29.227 20568.222 - 20669.046: 93.6244% ( 34) 00:22:29.227 20669.046 - 20769.871: 94.0430% ( 30) 00:22:29.227 20769.871 - 20870.695: 94.4336% ( 28) 00:22:29.227 20870.695 - 20971.520: 94.6987% ( 19) 00:22:29.227 20971.520 - 21072.345: 94.9219% ( 16) 00:22:29.227 21072.345 - 21173.169: 95.2567% ( 24) 00:22:29.227 21173.169 - 21273.994: 95.5497% ( 21) 00:22:29.227 21273.994 - 21374.818: 95.8984% ( 25) 00:22:29.227 21374.818 - 21475.643: 96.4844% ( 42) 00:22:29.227 21475.643 - 21576.468: 96.8471% ( 26) 00:22:29.227 21576.468 - 21677.292: 97.0982% ( 18) 00:22:29.227 21677.292 - 21778.117: 97.2377% ( 10) 00:22:29.227 21778.117 - 21878.942: 97.4330% ( 14) 00:22:29.227 21878.942 - 21979.766: 97.5586% ( 9) 00:22:29.227 21979.766 - 22080.591: 97.7260% ( 12) 00:22:29.227 22080.591 - 22181.415: 97.9353% ( 15) 00:22:29.227 22181.415 - 22282.240: 98.0329% ( 7) 00:22:29.227 22282.240 - 22383.065: 98.1027% ( 5) 00:22:29.227 22383.065 - 22483.889: 98.1724% ( 5) 00:22:29.227 22483.889 - 22584.714: 98.2143% ( 3) 00:22:29.227 29239.138 - 29440.788: 98.2840% ( 5) 00:22:29.227 29440.788 - 29642.437: 98.3677% ( 6) 00:22:29.227 29642.437 - 29844.086: 98.4654% ( 7) 00:22:29.227 29844.086 - 30045.735: 98.5770% ( 8) 00:22:29.227 30045.735 - 30247.385: 98.6747% ( 7) 00:22:29.227 30247.385 - 30449.034: 98.7723% ( 7) 00:22:29.227 30449.034 - 30650.683: 98.8560% ( 6) 00:22:29.227 30650.683 - 30852.332: 98.9676% ( 8) 00:22:29.227 30852.332 - 31053.982: 99.0653% ( 7) 00:22:29.227 31053.982 - 31255.631: 99.1071% ( 3) 00:22:29.227 39119.951 - 39321.600: 99.1629% ( 4) 00:22:29.227 39321.600 - 39523.249: 99.2606% ( 7) 00:22:29.227 39523.249 - 39724.898: 99.3443% ( 6) 00:22:29.227 39724.898 - 39926.548: 99.4280% ( 6) 00:22:29.227 39926.548 - 40128.197: 99.5257% ( 7) 00:22:29.227 40128.197 - 40329.846: 99.6233% ( 7) 00:22:29.227 40329.846 - 40531.495: 99.7210% ( 7) 00:22:29.227 40531.495 - 40733.145: 99.8186% ( 7) 00:22:29.227 40733.145 - 40934.794: 99.9163% ( 7) 00:22:29.227 40934.794 - 41136.443: 100.0000% ( 6) 00:22:29.227 00:22:29.227 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:22:29.227 ============================================================================== 00:22:29.227 Range in us Cumulative IO count 00:22:29.227 10788.234 - 10838.646: 0.0415% ( 3) 00:22:29.227 10838.646 - 10889.058: 0.0691% ( 2) 00:22:29.227 10889.058 - 10939.471: 0.0968% ( 2) 00:22:29.227 10939.471 - 10989.883: 0.1383% ( 3) 00:22:29.227 10989.883 - 11040.295: 0.1936% ( 4) 00:22:29.227 11040.295 - 11090.708: 0.2074% ( 1) 00:22:29.227 11090.708 - 11141.120: 
0.2627% ( 4) 00:22:29.227 11191.532 - 11241.945: 0.2765% ( 1) 00:22:29.227 11241.945 - 11292.357: 0.4425% ( 12) 00:22:29.227 11292.357 - 11342.769: 0.5808% ( 10) 00:22:29.227 11342.769 - 11393.182: 0.6084% ( 2) 00:22:29.227 11393.182 - 11443.594: 0.6361% ( 2) 00:22:29.227 11443.594 - 11494.006: 0.6637% ( 2) 00:22:29.227 11494.006 - 11544.418: 0.6914% ( 2) 00:22:29.227 11544.418 - 11594.831: 0.7190% ( 2) 00:22:29.227 11594.831 - 11645.243: 0.7329% ( 1) 00:22:29.227 11695.655 - 11746.068: 0.7882% ( 4) 00:22:29.227 11746.068 - 11796.480: 0.8850% ( 7) 00:22:29.227 13208.025 - 13308.849: 0.9126% ( 2) 00:22:29.227 13308.849 - 13409.674: 1.0647% ( 11) 00:22:29.227 13409.674 - 13510.498: 1.1753% ( 8) 00:22:29.227 13510.498 - 13611.323: 1.2998% ( 9) 00:22:29.227 13611.323 - 13712.148: 1.5210% ( 16) 00:22:29.227 13712.148 - 13812.972: 1.6869% ( 12) 00:22:29.227 13812.972 - 13913.797: 1.9220% ( 17) 00:22:29.227 13913.797 - 14014.622: 2.1156% ( 14) 00:22:29.227 14014.622 - 14115.446: 2.5581% ( 32) 00:22:29.227 14115.446 - 14216.271: 2.7931% ( 17) 00:22:29.227 14216.271 - 14317.095: 3.0697% ( 20) 00:22:29.227 14317.095 - 14417.920: 3.6090% ( 39) 00:22:29.227 14417.920 - 14518.745: 3.8993% ( 21) 00:22:29.227 14518.745 - 14619.569: 4.4386% ( 39) 00:22:29.227 14619.569 - 14720.394: 4.9640% ( 38) 00:22:29.227 14720.394 - 14821.218: 5.3789% ( 30) 00:22:29.227 14821.218 - 14922.043: 5.9458% ( 41) 00:22:29.227 14922.043 - 15022.868: 6.3744% ( 31) 00:22:29.227 15022.868 - 15123.692: 7.0520% ( 49) 00:22:29.227 15123.692 - 15224.517: 7.8678% ( 59) 00:22:29.227 15224.517 - 15325.342: 8.9602% ( 79) 00:22:29.227 15325.342 - 15426.166: 10.2323% ( 92) 00:22:29.227 15426.166 - 15526.991: 12.0990% ( 135) 00:22:29.227 15526.991 - 15627.815: 14.3252% ( 161) 00:22:29.227 15627.815 - 15728.640: 16.7312% ( 174) 00:22:29.227 15728.640 - 15829.465: 19.2063% ( 179) 00:22:29.227 15829.465 - 15930.289: 21.3357% ( 154) 00:22:29.227 15930.289 - 16031.114: 23.5619% ( 161) 00:22:29.227 16031.114 - 16131.938: 25.6499% ( 151) 00:22:29.227 16131.938 - 16232.763: 27.6134% ( 142) 00:22:29.227 16232.763 - 16333.588: 30.2959% ( 194) 00:22:29.227 16333.588 - 16434.412: 32.7434% ( 177) 00:22:29.227 16434.412 - 16535.237: 34.9004% ( 156) 00:22:29.227 16535.237 - 16636.062: 36.8778% ( 143) 00:22:29.227 16636.062 - 16736.886: 38.8274% ( 141) 00:22:29.227 16736.886 - 16837.711: 41.0260% ( 159) 00:22:29.227 16837.711 - 16938.535: 43.2384% ( 160) 00:22:29.227 16938.535 - 17039.360: 45.4231% ( 158) 00:22:29.227 17039.360 - 17140.185: 47.5387% ( 153) 00:22:29.227 17140.185 - 17241.009: 48.7832% ( 90) 00:22:29.227 17241.009 - 17341.834: 50.0830% ( 94) 00:22:29.227 17341.834 - 17442.658: 51.2998% ( 88) 00:22:29.227 17442.658 - 17543.483: 52.7793% ( 107) 00:22:29.227 17543.483 - 17644.308: 54.3003% ( 110) 00:22:29.227 17644.308 - 17745.132: 55.7799% ( 107) 00:22:29.227 17745.132 - 17845.957: 57.1073% ( 96) 00:22:29.227 17845.957 - 17946.782: 58.4209% ( 95) 00:22:29.227 17946.782 - 18047.606: 59.7622% ( 97) 00:22:29.227 18047.606 - 18148.431: 61.8639% ( 152) 00:22:29.227 18148.431 - 18249.255: 64.1869% ( 168) 00:22:29.227 18249.255 - 18350.080: 66.5514% ( 171) 00:22:29.227 18350.080 - 18450.905: 68.5149% ( 142) 00:22:29.227 18450.905 - 18551.729: 70.7688% ( 163) 00:22:29.227 18551.729 - 18652.554: 72.5664% ( 130) 00:22:29.227 18652.554 - 18753.378: 74.0459% ( 107) 00:22:29.227 18753.378 - 18854.203: 75.7052% ( 120) 00:22:29.227 18854.203 - 18955.028: 77.0326% ( 96) 00:22:29.227 18955.028 - 19055.852: 78.1388% ( 80) 00:22:29.227 19055.852 - 19156.677: 
79.3695% ( 89) 00:22:29.227 19156.677 - 19257.502: 80.5033% ( 82) 00:22:29.227 19257.502 - 19358.326: 81.7340% ( 89) 00:22:29.227 19358.326 - 19459.151: 82.9784% ( 90) 00:22:29.227 19459.151 - 19559.975: 84.2920% ( 95) 00:22:29.227 19559.975 - 19660.800: 85.2738% ( 71) 00:22:29.227 19660.800 - 19761.625: 86.3523% ( 78) 00:22:29.227 19761.625 - 19862.449: 87.3064% ( 69) 00:22:29.227 19862.449 - 19963.274: 88.3158% ( 73) 00:22:29.227 19963.274 - 20064.098: 89.7124% ( 101) 00:22:29.227 20064.098 - 20164.923: 90.6527% ( 68) 00:22:29.227 20164.923 - 20265.748: 91.5238% ( 63) 00:22:29.227 20265.748 - 20366.572: 92.2981% ( 56) 00:22:29.227 20366.572 - 20467.397: 92.9757% ( 49) 00:22:29.227 20467.397 - 20568.222: 93.6947% ( 52) 00:22:29.227 20568.222 - 20669.046: 94.3446% ( 47) 00:22:29.227 20669.046 - 20769.871: 94.7041% ( 26) 00:22:29.227 20769.871 - 20870.695: 95.0498% ( 25) 00:22:29.227 20870.695 - 20971.520: 95.3125% ( 19) 00:22:29.227 20971.520 - 21072.345: 95.6444% ( 24) 00:22:29.227 21072.345 - 21173.169: 95.8794% ( 17) 00:22:29.227 21173.169 - 21273.994: 96.0868% ( 15) 00:22:29.227 21273.994 - 21374.818: 96.4187% ( 24) 00:22:29.227 21374.818 - 21475.643: 96.7367% ( 23) 00:22:29.227 21475.643 - 21576.468: 97.0133% ( 20) 00:22:29.227 21576.468 - 21677.292: 97.2483% ( 17) 00:22:29.227 21677.292 - 21778.117: 97.4419% ( 14) 00:22:29.227 21778.117 - 21878.942: 97.6355% ( 14) 00:22:29.227 21878.942 - 21979.766: 97.8844% ( 18) 00:22:29.227 21979.766 - 22080.591: 98.1056% ( 16) 00:22:29.227 22080.591 - 22181.415: 98.2439% ( 10) 00:22:29.227 22181.415 - 22282.240: 98.3960% ( 11) 00:22:29.227 22282.240 - 22383.065: 98.5066% ( 8) 00:22:29.227 22383.065 - 22483.889: 98.6587% ( 11) 00:22:29.227 22483.889 - 22584.714: 98.8108% ( 11) 00:22:29.227 22584.714 - 22685.538: 98.9353% ( 9) 00:22:29.227 22685.538 - 22786.363: 99.0183% ( 6) 00:22:29.227 22786.363 - 22887.188: 99.0874% ( 5) 00:22:29.227 22887.188 - 22988.012: 99.1150% ( 2) 00:22:29.227 28835.840 - 29037.489: 99.2118% ( 7) 00:22:29.227 29037.489 - 29239.138: 99.2948% ( 6) 00:22:29.227 29239.138 - 29440.788: 99.3916% ( 7) 00:22:29.227 29440.788 - 29642.437: 99.4884% ( 7) 00:22:29.227 29642.437 - 29844.086: 99.5852% ( 7) 00:22:29.227 29844.086 - 30045.735: 99.6820% ( 7) 00:22:29.227 30045.735 - 30247.385: 99.7788% ( 7) 00:22:29.227 30247.385 - 30449.034: 99.8756% ( 7) 00:22:29.227 30449.034 - 30650.683: 99.9585% ( 6) 00:22:29.227 30650.683 - 30852.332: 100.0000% ( 3) 00:22:29.227 00:22:29.227 ************************************ 00:22:29.227 END TEST nvme_perf 00:22:29.227 ************************************ 00:22:29.227 23:04:07 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:22:29.227 00:22:29.227 real 0m2.580s 00:22:29.227 user 0m2.212s 00:22:29.227 sys 0m0.243s 00:22:29.227 23:04:07 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.227 23:04:07 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:22:29.228 23:04:07 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:22:29.228 23:04:07 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:29.228 23:04:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.228 23:04:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:29.228 ************************************ 00:22:29.228 START TEST nvme_hello_world 00:22:29.228 ************************************ 00:22:29.228 23:04:07 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:22:29.228 Initializing NVMe Controllers 00:22:29.228 Attached to 0000:00:11.0 00:22:29.228 Namespace ID: 1 size: 5GB 00:22:29.228 Attached to 0000:00:13.0 00:22:29.228 Namespace ID: 1 size: 1GB 00:22:29.228 Attached to 0000:00:10.0 00:22:29.228 Namespace ID: 1 size: 6GB 00:22:29.228 Attached to 0000:00:12.0 00:22:29.228 Namespace ID: 1 size: 4GB 00:22:29.228 Namespace ID: 2 size: 4GB 00:22:29.228 Namespace ID: 3 size: 4GB 00:22:29.228 Initialization complete. 00:22:29.228 INFO: using host memory buffer for IO 00:22:29.228 Hello world! 00:22:29.228 INFO: using host memory buffer for IO 00:22:29.228 Hello world! 00:22:29.228 INFO: using host memory buffer for IO 00:22:29.228 Hello world! 00:22:29.228 INFO: using host memory buffer for IO 00:22:29.228 Hello world! 00:22:29.228 INFO: using host memory buffer for IO 00:22:29.228 Hello world! 00:22:29.228 INFO: using host memory buffer for IO 00:22:29.228 Hello world! 00:22:29.489 ************************************ 00:22:29.489 END TEST nvme_hello_world 00:22:29.489 ************************************ 00:22:29.489 00:22:29.489 real 0m0.267s 00:22:29.489 user 0m0.094s 00:22:29.489 sys 0m0.120s 00:22:29.489 23:04:07 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.489 23:04:07 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:29.489 23:04:07 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:22:29.489 23:04:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:29.489 23:04:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.489 23:04:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:29.489 ************************************ 00:22:29.489 START TEST nvme_sgl 00:22:29.489 ************************************ 00:22:29.489 23:04:07 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:22:29.751 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:22:29.751 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:22:29.751 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:22:29.751 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:22:29.751 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:22:29.751 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:22:29.751 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:22:29.751 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:22:29.751 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:22:29.751 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:22:29.751 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:22:29.751 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:22:29.752 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:22:29.752 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:22:29.752 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:22:29.752 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:22:29.752 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:22:29.752 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:22:29.752 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:22:29.752 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:22:29.752 0000:00:10.0: build_io_request_3 Invalid IO 
length parameter 00:22:29.752 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:22:29.752 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:22:29.752 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:22:29.752 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:22:29.752 NVMe Readv/Writev Request test 00:22:29.752 Attached to 0000:00:11.0 00:22:29.752 Attached to 0000:00:13.0 00:22:29.752 Attached to 0000:00:10.0 00:22:29.752 Attached to 0000:00:12.0 00:22:29.752 0000:00:11.0: build_io_request_2 test passed 00:22:29.752 0000:00:11.0: build_io_request_4 test passed 00:22:29.752 0000:00:11.0: build_io_request_5 test passed 00:22:29.752 0000:00:11.0: build_io_request_6 test passed 00:22:29.752 0000:00:11.0: build_io_request_7 test passed 00:22:29.752 0000:00:11.0: build_io_request_10 test passed 00:22:29.752 0000:00:10.0: build_io_request_2 test passed 00:22:29.752 0000:00:10.0: build_io_request_4 test passed 00:22:29.752 0000:00:10.0: build_io_request_5 test passed 00:22:29.752 0000:00:10.0: build_io_request_6 test passed 00:22:29.752 0000:00:10.0: build_io_request_7 test passed 00:22:29.752 0000:00:10.0: build_io_request_10 test passed 00:22:29.752 Cleaning up... 00:22:29.752 00:22:29.752 real 0m0.347s 00:22:29.752 user 0m0.185s 00:22:29.752 sys 0m0.112s 00:22:29.752 23:04:08 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.752 ************************************ 00:22:29.752 23:04:08 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:22:29.752 END TEST nvme_sgl 00:22:29.752 ************************************ 00:22:29.752 23:04:08 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:22:29.752 23:04:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:29.752 23:04:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.752 23:04:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:29.752 ************************************ 00:22:29.752 START TEST nvme_e2edp 00:22:29.752 ************************************ 00:22:29.752 23:04:08 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:22:30.016 NVMe Write/Read with End-to-End data protection test 00:22:30.017 Attached to 0000:00:11.0 00:22:30.017 Attached to 0000:00:13.0 00:22:30.017 Attached to 0000:00:10.0 00:22:30.017 Attached to 0000:00:12.0 00:22:30.017 Cleaning up... 
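A note on what the nvme_dp run above exercises: "End-to-End data protection" refers to NVMe/T10 protection information, where each block typically carries an 8-byte metadata field whose guard tag is a CRC-16/T10-DIF checksum (polynomial 0x8BB7) over the block data, verified by both host and controller. The sketch below is a minimal, SPDK-independent Python illustration of that checksum; the function name and sample input are illustrative and not taken from the test itself.

    def crc16_t10dif(data: bytes, crc: int = 0x0000) -> int:
        # CRC-16/T10-DIF: polynomial 0x8BB7, no bit reflection,
        # zero initial value and zero XOR-out.
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:
                    crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    # Published catalog check value for this CRC variant.
    assert crc16_t10dif(b"123456789") == 0xD0DB

A mismatch between the guard computed over the data and the guard stored in the metadata is what lets host or controller reject corrupted blocks in flight, which is the behavior this kind of test validates.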
00:22:30.017 ************************************ 00:22:30.017 END TEST nvme_e2edp 00:22:30.017 ************************************ 00:22:30.017 00:22:30.017 real 0m0.236s 00:22:30.017 user 0m0.071s 00:22:30.017 sys 0m0.117s 00:22:30.017 23:04:08 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:30.017 23:04:08 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:22:30.278 23:04:08 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:22:30.278 23:04:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:30.278 23:04:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.278 23:04:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:30.278 ************************************ 00:22:30.278 START TEST nvme_reserve 00:22:30.278 ************************************ 00:22:30.278 23:04:08 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:22:30.278 ===================================================== 00:22:30.278 NVMe Controller at PCI bus 0, device 17, function 0 00:22:30.278 ===================================================== 00:22:30.278 Reservations: Not Supported 00:22:30.278 ===================================================== 00:22:30.278 NVMe Controller at PCI bus 0, device 19, function 0 00:22:30.278 ===================================================== 00:22:30.278 Reservations: Not Supported 00:22:30.278 ===================================================== 00:22:30.278 NVMe Controller at PCI bus 0, device 16, function 0 00:22:30.278 ===================================================== 00:22:30.278 Reservations: Not Supported 00:22:30.278 ===================================================== 00:22:30.278 NVMe Controller at PCI bus 0, device 18, function 0 00:22:30.278 ===================================================== 00:22:30.278 Reservations: Not Supported 00:22:30.278 Reservation test passed 00:22:30.539 ************************************ 00:22:30.539 END TEST nvme_reserve 00:22:30.540 ************************************ 00:22:30.540 00:22:30.540 real 0m0.243s 00:22:30.540 user 0m0.072s 00:22:30.540 sys 0m0.125s 00:22:30.540 23:04:08 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:30.540 23:04:08 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:22:30.540 23:04:08 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:22:30.540 23:04:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:30.540 23:04:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.540 23:04:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:30.540 ************************************ 00:22:30.540 START TEST nvme_err_injection 00:22:30.540 ************************************ 00:22:30.540 23:04:08 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:22:30.896 NVMe Error Injection test 00:22:30.896 Attached to 0000:00:11.0 00:22:30.896 Attached to 0000:00:13.0 00:22:30.896 Attached to 0000:00:10.0 00:22:30.896 Attached to 0000:00:12.0 00:22:30.896 0000:00:11.0: get features failed as expected 00:22:30.896 0000:00:13.0: get features failed as expected 00:22:30.896 0000:00:10.0: get features failed as expected 00:22:30.896 0000:00:12.0: get features failed as expected 00:22:30.896 
0000:00:12.0: get features successfully as expected 00:22:30.896 0000:00:11.0: get features successfully as expected 00:22:30.896 0000:00:13.0: get features successfully as expected 00:22:30.896 0000:00:10.0: get features successfully as expected 00:22:30.896 0000:00:12.0: read failed as expected 00:22:30.896 0000:00:11.0: read failed as expected 00:22:30.896 0000:00:13.0: read failed as expected 00:22:30.896 0000:00:10.0: read failed as expected 00:22:30.896 0000:00:12.0: read successfully as expected 00:22:30.896 0000:00:11.0: read successfully as expected 00:22:30.896 0000:00:13.0: read successfully as expected 00:22:30.896 0000:00:10.0: read successfully as expected 00:22:30.896 Cleaning up... 00:22:30.896 ************************************ 00:22:30.896 END TEST nvme_err_injection 00:22:30.896 ************************************ 00:22:30.896 00:22:30.896 real 0m0.254s 00:22:30.896 user 0m0.097s 00:22:30.896 sys 0m0.108s 00:22:30.896 23:04:09 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:30.896 23:04:09 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:22:30.896 23:04:09 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:22:30.896 23:04:09 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:22:30.897 23:04:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.897 23:04:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:30.897 ************************************ 00:22:30.897 START TEST nvme_overhead 00:22:30.897 ************************************ 00:22:30.897 23:04:09 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:22:32.288 Initializing NVMe Controllers 00:22:32.288 Attached to 0000:00:11.0 00:22:32.288 Attached to 0000:00:13.0 00:22:32.288 Attached to 0000:00:10.0 00:22:32.288 Attached to 0000:00:12.0 00:22:32.288 Initialization complete. Launching workers. 
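The overhead results that follow report submit and complete costs twice: as avg/min/max summary figures in nanoseconds, and as cumulative histograms bucketed in microseconds ("Range in us Cumulative Count", with per-bucket IO counts in parentheses). When only the histogram rows are available, the summary numbers can be approximated from the buckets; a rough Python sketch, where the three sample triples follow the printed row format but are placeholders rather than rows copied verbatim from this run:

    # (low_us, high_us, count) triples in the printed histogram-row format;
    # placeholder values for illustration, not exact rows from this run.
    buckets = [(12.702, 12.800, 1), (13.686, 13.785, 8), (14.080, 14.178, 24)]

    total = sum(n for _, _, n in buckets)

    # Midpoint weighting: an approximation, since an IO can land
    # anywhere within its bucket.
    mean_us = sum((lo + hi) / 2 * n for lo, hi, n in buckets) / total

    def percentile(p):
        # Upper edge of the first bucket whose cumulative share reaches
        # p (0..1), mirroring how the cumulative percentage column grows.
        seen = 0
        for lo, hi, n in buckets:
            seen += n
            if seen / total >= p:
                return hi
        return buckets[-1][1]

    print(f"mean ~= {mean_us:.2f} us, p50 <= {percentile(0.50)} us")

Midpoint weighting is only as precise as the bucket width, which is why the tool's exact averages (computed per-IO in nanoseconds) will not match a histogram-derived estimate exactly.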
00:22:32.288 submit (in ns) avg, min, max = 17004.4, 12740.0, 162503.1 00:22:32.288 complete (in ns) avg, min, max = 11249.0, 8280.8, 138462.3 00:22:32.288 00:22:32.288 Submit histogram 00:22:32.288 ================ 00:22:32.288 Range in us Cumulative Count 00:22:32.288 12.702 - 12.800: 0.0373% ( 1) 00:22:32.288 12.898 - 12.997: 0.0747% ( 1) 00:22:32.288 13.588 - 13.686: 0.2614% ( 5) 00:22:32.288 13.686 - 13.785: 0.5601% ( 8) 00:22:32.288 13.785 - 13.883: 0.9335% ( 10) 00:22:32.288 13.883 - 13.982: 1.6430% ( 19) 00:22:32.288 13.982 - 14.080: 2.3152% ( 18) 00:22:32.288 14.080 - 14.178: 3.2114% ( 24) 00:22:32.288 14.178 - 14.277: 4.7423% ( 41) 00:22:32.288 14.277 - 14.375: 6.8335% ( 56) 00:22:32.288 14.375 - 14.474: 8.9619% ( 57) 00:22:32.288 14.474 - 14.572: 12.1359% ( 85) 00:22:32.288 14.572 - 14.671: 15.8327% ( 99) 00:22:32.289 14.671 - 14.769: 19.9776% ( 111) 00:22:32.289 14.769 - 14.868: 25.9895% ( 161) 00:22:32.289 14.868 - 14.966: 33.3458% ( 197) 00:22:32.289 14.966 - 15.065: 39.6938% ( 170) 00:22:32.289 15.065 - 15.163: 46.6019% ( 185) 00:22:32.289 15.163 - 15.262: 53.9582% ( 197) 00:22:32.289 15.262 - 15.360: 60.3435% ( 171) 00:22:32.289 15.360 - 15.458: 64.4511% ( 110) 00:22:32.289 15.458 - 15.557: 67.9612% ( 94) 00:22:32.289 15.557 - 15.655: 70.3510% ( 64) 00:22:32.289 15.655 - 15.754: 72.5915% ( 60) 00:22:32.289 15.754 - 15.852: 74.0105% ( 38) 00:22:32.289 15.852 - 15.951: 74.7946% ( 21) 00:22:32.289 15.951 - 16.049: 76.0642% ( 34) 00:22:32.289 16.049 - 16.148: 76.4750% ( 11) 00:22:32.289 16.148 - 16.246: 77.4459% ( 26) 00:22:32.289 16.246 - 16.345: 77.9313% ( 13) 00:22:32.289 16.345 - 16.443: 78.3794% ( 12) 00:22:32.289 16.443 - 16.542: 78.7901% ( 11) 00:22:32.289 16.542 - 16.640: 79.1262% ( 9) 00:22:32.289 16.640 - 16.738: 79.3503% ( 6) 00:22:32.289 16.738 - 16.837: 79.7237% ( 10) 00:22:32.289 16.837 - 16.935: 80.0597% ( 9) 00:22:32.289 16.935 - 17.034: 80.2465% ( 5) 00:22:32.289 17.034 - 17.132: 80.9933% ( 20) 00:22:32.289 17.132 - 17.231: 81.3667% ( 10) 00:22:32.289 17.231 - 17.329: 81.5534% ( 5) 00:22:32.289 17.329 - 17.428: 81.8895% ( 9) 00:22:32.289 17.428 - 17.526: 82.3002% ( 11) 00:22:32.289 17.526 - 17.625: 82.4869% ( 5) 00:22:32.289 17.625 - 17.723: 82.7483% ( 7) 00:22:32.289 17.723 - 17.822: 82.8230% ( 2) 00:22:32.289 17.822 - 17.920: 83.0471% ( 6) 00:22:32.289 17.920 - 18.018: 83.4205% ( 10) 00:22:32.289 18.018 - 18.117: 83.7192% ( 8) 00:22:32.289 18.117 - 18.215: 83.9432% ( 6) 00:22:32.289 18.215 - 18.314: 84.2793% ( 9) 00:22:32.289 18.314 - 18.412: 84.5034% ( 6) 00:22:32.289 18.412 - 18.511: 84.7647% ( 7) 00:22:32.289 18.511 - 18.609: 84.9141% ( 4) 00:22:32.289 18.609 - 18.708: 85.0261% ( 3) 00:22:32.289 18.708 - 18.806: 85.5116% ( 13) 00:22:32.289 18.806 - 18.905: 85.8850% ( 10) 00:22:32.289 18.905 - 19.003: 86.0717% ( 5) 00:22:32.289 19.003 - 19.102: 86.3704% ( 8) 00:22:32.289 19.102 - 19.200: 86.6692% ( 8) 00:22:32.289 19.200 - 19.298: 86.8932% ( 6) 00:22:32.289 19.298 - 19.397: 87.2666% ( 10) 00:22:32.289 19.397 - 19.495: 87.5280% ( 7) 00:22:32.289 19.495 - 19.594: 87.9761% ( 12) 00:22:32.289 19.594 - 19.692: 88.3495% ( 10) 00:22:32.289 19.692 - 19.791: 88.7229% ( 10) 00:22:32.289 19.791 - 19.889: 88.9096% ( 5) 00:22:32.289 19.889 - 19.988: 89.2457% ( 9) 00:22:32.289 19.988 - 20.086: 89.4698% ( 6) 00:22:32.289 20.086 - 20.185: 89.6565% ( 5) 00:22:32.289 20.185 - 20.283: 89.8058% ( 4) 00:22:32.289 20.283 - 20.382: 90.0672% ( 7) 00:22:32.289 20.382 - 20.480: 90.2166% ( 4) 00:22:32.289 20.480 - 20.578: 90.4033% ( 5) 00:22:32.289 20.578 - 20.677: 90.4780% ( 2) 
00:22:32.289 20.677 - 20.775: 90.5900% ( 3) 00:22:32.289 20.775 - 20.874: 90.7020% ( 3) 00:22:32.289 20.874 - 20.972: 90.8887% ( 5) 00:22:32.289 20.972 - 21.071: 91.0007% ( 3) 00:22:32.289 21.071 - 21.169: 91.0754% ( 2) 00:22:32.289 21.169 - 21.268: 91.1501% ( 2) 00:22:32.289 21.268 - 21.366: 91.2248% ( 2) 00:22:32.289 21.366 - 21.465: 91.2621% ( 1) 00:22:32.289 21.465 - 21.563: 91.2995% ( 1) 00:22:32.289 21.563 - 21.662: 91.3742% ( 2) 00:22:32.289 21.760 - 21.858: 91.5982% ( 6) 00:22:32.289 21.858 - 21.957: 91.7849% ( 5) 00:22:32.289 21.957 - 22.055: 91.8223% ( 1) 00:22:32.289 22.055 - 22.154: 91.8969% ( 2) 00:22:32.289 22.154 - 22.252: 92.1210% ( 6) 00:22:32.289 22.252 - 22.351: 92.3077% ( 5) 00:22:32.289 22.351 - 22.449: 92.4197% ( 3) 00:22:32.289 22.449 - 22.548: 92.5691% ( 4) 00:22:32.289 22.548 - 22.646: 92.6811% ( 3) 00:22:32.289 22.646 - 22.745: 92.7931% ( 3) 00:22:32.289 22.745 - 22.843: 92.8678% ( 2) 00:22:32.289 22.843 - 22.942: 93.0919% ( 6) 00:22:32.289 22.942 - 23.040: 93.1665% ( 2) 00:22:32.289 23.040 - 23.138: 93.2412% ( 2) 00:22:32.289 23.138 - 23.237: 93.3906% ( 4) 00:22:32.289 23.237 - 23.335: 93.4279% ( 1) 00:22:32.289 23.335 - 23.434: 93.4653% ( 1) 00:22:32.289 23.434 - 23.532: 93.5400% ( 2) 00:22:32.289 23.631 - 23.729: 93.5773% ( 1) 00:22:32.289 23.729 - 23.828: 93.6893% ( 3) 00:22:32.289 23.828 - 23.926: 93.7267% ( 1) 00:22:32.289 23.926 - 24.025: 93.8387% ( 3) 00:22:32.289 24.025 - 24.123: 94.0254% ( 5) 00:22:32.289 24.123 - 24.222: 94.1748% ( 4) 00:22:32.289 24.222 - 24.320: 94.2121% ( 1) 00:22:32.289 24.320 - 24.418: 94.2868% ( 2) 00:22:32.289 24.418 - 24.517: 94.5108% ( 6) 00:22:32.289 24.517 - 24.615: 94.5855% ( 2) 00:22:32.289 24.615 - 24.714: 94.6602% ( 2) 00:22:32.289 24.714 - 24.812: 94.7722% ( 3) 00:22:32.289 24.812 - 24.911: 94.8469% ( 2) 00:22:32.289 24.911 - 25.009: 94.9216% ( 2) 00:22:32.289 25.108 - 25.206: 94.9963% ( 2) 00:22:32.289 25.206 - 25.403: 95.0336% ( 1) 00:22:32.289 25.403 - 25.600: 95.0709% ( 1) 00:22:32.289 25.600 - 25.797: 95.2950% ( 6) 00:22:32.289 25.797 - 25.994: 95.3697% ( 2) 00:22:32.289 25.994 - 26.191: 95.4817% ( 3) 00:22:32.289 26.191 - 26.388: 95.5190% ( 1) 00:22:32.289 26.388 - 26.585: 95.5937% ( 2) 00:22:32.289 26.585 - 26.782: 95.6311% ( 1) 00:22:32.289 26.782 - 26.978: 95.7431% ( 3) 00:22:32.289 26.978 - 27.175: 95.8178% ( 2) 00:22:32.289 27.175 - 27.372: 95.8551% ( 1) 00:22:32.289 27.372 - 27.569: 95.9298% ( 2) 00:22:32.289 27.569 - 27.766: 95.9671% ( 1) 00:22:32.289 27.766 - 27.963: 96.0045% ( 1) 00:22:32.289 27.963 - 28.160: 96.1912% ( 5) 00:22:32.289 28.160 - 28.357: 96.2285% ( 1) 00:22:32.289 28.357 - 28.554: 96.2659% ( 1) 00:22:32.289 28.554 - 28.751: 96.3406% ( 2) 00:22:32.289 28.751 - 28.948: 96.3779% ( 1) 00:22:32.289 28.948 - 29.145: 96.5273% ( 4) 00:22:32.289 29.145 - 29.342: 96.5646% ( 1) 00:22:32.289 29.342 - 29.538: 96.6766% ( 3) 00:22:32.289 29.735 - 29.932: 96.7513% ( 2) 00:22:32.289 29.932 - 30.129: 96.8260% ( 2) 00:22:32.289 30.129 - 30.326: 96.9007% ( 2) 00:22:32.289 30.326 - 30.523: 97.0874% ( 5) 00:22:32.289 30.917 - 31.114: 97.1247% ( 1) 00:22:32.289 31.114 - 31.311: 97.1621% ( 1) 00:22:32.289 31.311 - 31.508: 97.1994% ( 1) 00:22:32.289 31.508 - 31.705: 97.2741% ( 2) 00:22:32.289 31.705 - 31.902: 97.3488% ( 2) 00:22:32.289 32.098 - 32.295: 97.3861% ( 1) 00:22:32.289 32.295 - 32.492: 97.4235% ( 1) 00:22:32.289 32.492 - 32.689: 97.5728% ( 4) 00:22:32.289 32.689 - 32.886: 97.7222% ( 4) 00:22:32.289 32.886 - 33.083: 97.8342% ( 3) 00:22:32.289 33.083 - 33.280: 97.9089% ( 2) 00:22:32.289 33.280 - 33.477: 
97.9462% ( 1) 00:22:32.289 33.477 - 33.674: 97.9836% ( 1) 00:22:32.289 33.674 - 33.871: 98.0956% ( 3) 00:22:32.289 34.068 - 34.265: 98.1329% ( 1) 00:22:32.289 34.658 - 34.855: 98.2076% ( 2) 00:22:32.289 35.052 - 35.249: 98.2823% ( 2) 00:22:32.289 35.643 - 35.840: 98.3196% ( 1) 00:22:32.289 35.840 - 36.037: 98.3943% ( 2) 00:22:32.289 36.234 - 36.431: 98.4317% ( 1) 00:22:32.289 36.431 - 36.628: 98.5063% ( 2) 00:22:32.289 36.628 - 36.825: 98.5437% ( 1) 00:22:32.289 37.415 - 37.612: 98.5810% ( 1) 00:22:32.289 38.006 - 38.203: 98.6184% ( 1) 00:22:32.289 38.597 - 38.794: 98.6557% ( 1) 00:22:32.289 39.778 - 39.975: 98.7304% ( 2) 00:22:32.289 40.369 - 40.566: 98.7677% ( 1) 00:22:32.289 40.960 - 41.157: 98.8051% ( 1) 00:22:32.289 41.157 - 41.354: 98.8798% ( 2) 00:22:32.289 41.551 - 41.748: 98.9171% ( 1) 00:22:32.289 41.748 - 41.945: 98.9544% ( 1) 00:22:32.289 42.142 - 42.338: 98.9918% ( 1) 00:22:32.289 42.338 - 42.535: 99.0665% ( 2) 00:22:32.289 43.717 - 43.914: 99.1038% ( 1) 00:22:32.289 44.111 - 44.308: 99.1412% ( 1) 00:22:32.289 44.702 - 44.898: 99.1785% ( 1) 00:22:32.289 45.292 - 45.489: 99.2158% ( 1) 00:22:32.289 51.594 - 51.988: 99.2532% ( 1) 00:22:32.289 52.775 - 53.169: 99.2905% ( 1) 00:22:32.289 54.351 - 54.745: 99.3279% ( 1) 00:22:32.289 57.502 - 57.895: 99.3652% ( 1) 00:22:32.289 58.683 - 59.077: 99.4025% ( 1) 00:22:32.289 62.228 - 62.622: 99.4399% ( 1) 00:22:32.289 63.015 - 63.409: 99.4772% ( 1) 00:22:32.289 64.197 - 64.591: 99.5146% ( 1) 00:22:32.289 66.166 - 66.560: 99.5519% ( 1) 00:22:32.289 68.135 - 68.529: 99.5892% ( 1) 00:22:32.289 68.923 - 69.317: 99.6266% ( 1) 00:22:32.290 71.286 - 71.680: 99.6639% ( 1) 00:22:32.290 71.680 - 72.074: 99.7013% ( 1) 00:22:32.290 72.074 - 72.468: 99.7386% ( 1) 00:22:32.290 75.618 - 76.012: 99.7760% ( 1) 00:22:32.290 80.738 - 81.132: 99.8133% ( 1) 00:22:32.290 81.526 - 81.920: 99.8506% ( 1) 00:22:32.290 85.858 - 86.252: 99.8880% ( 1) 00:22:32.290 111.065 - 111.852: 99.9253% ( 1) 00:22:32.290 126.031 - 126.818: 99.9627% ( 1) 00:22:32.290 162.265 - 163.052: 100.0000% ( 1) 00:22:32.290 00:22:32.290 Complete histogram 00:22:32.290 ================== 00:22:32.290 Range in us Cumulative Count 00:22:32.290 8.271 - 8.320: 0.0747% ( 2) 00:22:32.290 8.320 - 8.369: 0.2240% ( 4) 00:22:32.290 8.369 - 8.418: 0.4481% ( 6) 00:22:32.290 8.418 - 8.468: 0.6348% ( 5) 00:22:32.290 8.468 - 8.517: 1.0082% ( 10) 00:22:32.290 8.517 - 8.566: 1.6430% ( 17) 00:22:32.290 8.566 - 8.615: 2.1285% ( 13) 00:22:32.290 8.615 - 8.665: 2.6886% ( 15) 00:22:32.290 8.665 - 8.714: 3.1740% ( 13) 00:22:32.290 8.714 - 8.763: 3.8088% ( 17) 00:22:32.290 8.763 - 8.812: 4.3316% ( 14) 00:22:32.290 8.812 - 8.862: 5.0037% ( 18) 00:22:32.290 8.862 - 8.911: 5.3398% ( 9) 00:22:32.290 8.911 - 8.960: 5.7132% ( 10) 00:22:32.290 8.960 - 9.009: 6.0493% ( 9) 00:22:32.290 9.009 - 9.058: 6.3107% ( 7) 00:22:32.290 9.058 - 9.108: 6.6468% ( 9) 00:22:32.290 9.108 - 9.157: 7.1322% ( 13) 00:22:32.290 9.157 - 9.206: 7.3562% ( 6) 00:22:32.290 9.206 - 9.255: 7.6176% ( 7) 00:22:32.290 9.255 - 9.305: 7.9164% ( 8) 00:22:32.290 9.305 - 9.354: 8.0284% ( 3) 00:22:32.290 9.354 - 9.403: 8.1404% ( 3) 00:22:32.290 9.403 - 9.452: 8.2898% ( 4) 00:22:32.290 9.452 - 9.502: 8.4018% ( 3) 00:22:32.290 9.502 - 9.551: 8.5885% ( 5) 00:22:32.290 9.551 - 9.600: 8.7005% ( 3) 00:22:32.290 9.600 - 9.649: 8.9619% ( 7) 00:22:32.290 9.649 - 9.698: 8.9993% ( 1) 00:22:32.290 9.698 - 9.748: 9.2980% ( 8) 00:22:32.290 9.748 - 9.797: 9.4847% ( 5) 00:22:32.290 9.797 - 9.846: 9.6341% ( 4) 00:22:32.290 9.846 - 9.895: 10.1942% ( 15) 00:22:32.290 9.895 - 
9.945: 10.9783% ( 21) 00:22:32.290 9.945 - 9.994: 11.8745% ( 24) 00:22:32.290 9.994 - 10.043: 13.2562% ( 37) 00:22:32.290 10.043 - 10.092: 15.5340% ( 61) 00:22:32.290 10.092 - 10.142: 18.6706% ( 84) 00:22:32.290 10.142 - 10.191: 22.3301% ( 98) 00:22:32.290 10.191 - 10.240: 27.3712% ( 135) 00:22:32.290 10.240 - 10.289: 32.8230% ( 146) 00:22:32.290 10.289 - 10.338: 38.3495% ( 148) 00:22:32.290 10.338 - 10.388: 43.4279% ( 136) 00:22:32.290 10.388 - 10.437: 48.6931% ( 141) 00:22:32.290 10.437 - 10.486: 53.5101% ( 129) 00:22:32.290 10.486 - 10.535: 58.3271% ( 129) 00:22:32.290 10.535 - 10.585: 62.1733% ( 103) 00:22:32.290 10.585 - 10.634: 65.8327% ( 98) 00:22:32.290 10.634 - 10.683: 69.1934% ( 90) 00:22:32.290 10.683 - 10.732: 72.2181% ( 81) 00:22:32.290 10.732 - 10.782: 74.7199% ( 67) 00:22:32.290 10.782 - 10.831: 76.5870% ( 50) 00:22:32.290 10.831 - 10.880: 77.8566% ( 34) 00:22:32.290 10.880 - 10.929: 79.1636% ( 35) 00:22:32.290 10.929 - 10.978: 80.5452% ( 37) 00:22:32.290 10.978 - 11.028: 81.4040% ( 23) 00:22:32.290 11.028 - 11.077: 82.5243% ( 30) 00:22:32.290 11.077 - 11.126: 83.3458% ( 22) 00:22:32.290 11.126 - 11.175: 84.1673% ( 22) 00:22:32.290 11.175 - 11.225: 84.8021% ( 17) 00:22:32.290 11.225 - 11.274: 85.4369% ( 17) 00:22:32.290 11.274 - 11.323: 85.9597% ( 14) 00:22:32.290 11.323 - 11.372: 86.5198% ( 15) 00:22:32.290 11.372 - 11.422: 86.9305% ( 11) 00:22:32.290 11.422 - 11.471: 87.4907% ( 15) 00:22:32.290 11.471 - 11.520: 87.9761% ( 13) 00:22:32.290 11.520 - 11.569: 88.2748% ( 8) 00:22:32.290 11.569 - 11.618: 88.4989% ( 6) 00:22:32.290 11.618 - 11.668: 88.6856% ( 5) 00:22:32.290 11.668 - 11.717: 89.0963% ( 11) 00:22:32.290 11.717 - 11.766: 89.2457% ( 4) 00:22:32.290 11.766 - 11.815: 89.5071% ( 7) 00:22:32.290 11.815 - 11.865: 89.7311% ( 6) 00:22:32.290 11.865 - 11.914: 89.9552% ( 6) 00:22:32.290 11.914 - 11.963: 90.1046% ( 4) 00:22:32.290 12.062 - 12.111: 90.2539% ( 4) 00:22:32.290 12.111 - 12.160: 90.4033% ( 4) 00:22:32.290 12.160 - 12.209: 90.5527% ( 4) 00:22:32.290 12.209 - 12.258: 90.6273% ( 2) 00:22:32.290 12.258 - 12.308: 90.7020% ( 2) 00:22:32.290 12.308 - 12.357: 90.7767% ( 2) 00:22:32.290 12.357 - 12.406: 90.9634% ( 5) 00:22:32.290 12.406 - 12.455: 91.1501% ( 5) 00:22:32.290 12.455 - 12.505: 91.1875% ( 1) 00:22:32.290 12.554 - 12.603: 91.2621% ( 2) 00:22:32.290 12.603 - 12.702: 91.3742% ( 3) 00:22:32.290 12.702 - 12.800: 91.5235% ( 4) 00:22:32.290 12.800 - 12.898: 91.5982% ( 2) 00:22:32.290 12.898 - 12.997: 91.7849% ( 5) 00:22:32.290 12.997 - 13.095: 91.8969% ( 3) 00:22:32.290 13.095 - 13.194: 92.0090% ( 3) 00:22:32.290 13.292 - 13.391: 92.1210% ( 3) 00:22:32.290 13.391 - 13.489: 92.1583% ( 1) 00:22:32.290 13.489 - 13.588: 92.2704% ( 3) 00:22:32.290 13.588 - 13.686: 92.4197% ( 4) 00:22:32.290 13.686 - 13.785: 92.5317% ( 3) 00:22:32.290 13.785 - 13.883: 92.7931% ( 7) 00:22:32.290 13.982 - 14.080: 92.8678% ( 2) 00:22:32.290 14.080 - 14.178: 92.9798% ( 3) 00:22:32.290 14.178 - 14.277: 93.1292% ( 4) 00:22:32.290 14.375 - 14.474: 93.2039% ( 2) 00:22:32.290 14.474 - 14.572: 93.2786% ( 2) 00:22:32.290 14.671 - 14.769: 93.3159% ( 1) 00:22:32.290 14.769 - 14.868: 93.3906% ( 2) 00:22:32.290 14.868 - 14.966: 93.4279% ( 1) 00:22:32.290 14.966 - 15.065: 93.4653% ( 1) 00:22:32.290 15.163 - 15.262: 93.5773% ( 3) 00:22:32.290 15.262 - 15.360: 93.7267% ( 4) 00:22:32.290 15.360 - 15.458: 93.8013% ( 2) 00:22:32.290 15.458 - 15.557: 93.9881% ( 5) 00:22:32.290 15.557 - 15.655: 94.0627% ( 2) 00:22:32.290 15.754 - 15.852: 94.1001% ( 1) 00:22:32.290 15.852 - 15.951: 94.2121% ( 3) 00:22:32.290 
15.951 - 16.049: 94.2494% ( 1) 00:22:32.290 16.049 - 16.148: 94.2868% ( 1) 00:22:32.290 16.246 - 16.345: 94.3241% ( 1) 00:22:32.290 16.345 - 16.443: 94.3615% ( 1) 00:22:32.290 16.443 - 16.542: 94.4361% ( 2) 00:22:32.290 16.640 - 16.738: 94.6229% ( 5) 00:22:32.290 16.738 - 16.837: 94.6602% ( 1) 00:22:32.290 16.837 - 16.935: 94.7349% ( 2) 00:22:32.290 16.935 - 17.034: 94.8096% ( 2) 00:22:32.290 17.034 - 17.132: 94.8469% ( 1) 00:22:32.290 17.132 - 17.231: 94.9589% ( 3) 00:22:32.290 17.329 - 17.428: 94.9963% ( 1) 00:22:32.290 17.428 - 17.526: 95.1830% ( 5) 00:22:32.290 17.526 - 17.625: 95.3697% ( 5) 00:22:32.290 17.625 - 17.723: 95.4070% ( 1) 00:22:32.290 17.723 - 17.822: 95.5564% ( 4) 00:22:32.290 17.822 - 17.920: 95.5937% ( 1) 00:22:32.290 17.920 - 18.018: 95.7804% ( 5) 00:22:32.290 18.018 - 18.117: 95.8178% ( 1) 00:22:32.290 18.117 - 18.215: 95.8551% ( 1) 00:22:32.290 18.215 - 18.314: 95.9671% ( 3) 00:22:32.290 18.314 - 18.412: 96.0418% ( 2) 00:22:32.290 18.511 - 18.609: 96.1165% ( 2) 00:22:32.290 18.609 - 18.708: 96.2285% ( 3) 00:22:32.290 18.708 - 18.806: 96.3032% ( 2) 00:22:32.290 18.806 - 18.905: 96.4899% ( 5) 00:22:32.290 18.905 - 19.003: 96.6019% ( 3) 00:22:32.290 19.003 - 19.102: 96.6393% ( 1) 00:22:32.290 19.102 - 19.200: 96.8260% ( 5) 00:22:32.290 19.200 - 19.298: 96.8633% ( 1) 00:22:32.290 19.298 - 19.397: 96.9007% ( 1) 00:22:32.290 19.495 - 19.594: 96.9380% ( 1) 00:22:32.290 19.594 - 19.692: 97.0500% ( 3) 00:22:32.290 19.791 - 19.889: 97.0874% ( 1) 00:22:32.290 19.889 - 19.988: 97.1621% ( 2) 00:22:32.290 19.988 - 20.086: 97.2367% ( 2) 00:22:32.290 20.185 - 20.283: 97.2741% ( 1) 00:22:32.290 20.283 - 20.382: 97.3114% ( 1) 00:22:32.290 20.382 - 20.480: 97.3488% ( 1) 00:22:32.290 20.578 - 20.677: 97.4235% ( 2) 00:22:32.290 20.677 - 20.775: 97.4981% ( 2) 00:22:32.290 20.775 - 20.874: 97.5355% ( 1) 00:22:32.290 20.972 - 21.071: 97.5728% ( 1) 00:22:32.290 21.071 - 21.169: 97.6102% ( 1) 00:22:32.290 21.169 - 21.268: 97.6848% ( 2) 00:22:32.290 21.465 - 21.563: 97.7595% ( 2) 00:22:32.290 21.760 - 21.858: 97.7969% ( 1) 00:22:32.290 21.858 - 21.957: 97.8342% ( 1) 00:22:32.290 22.055 - 22.154: 97.8715% ( 1) 00:22:32.290 22.449 - 22.548: 97.9089% ( 1) 00:22:32.290 22.745 - 22.843: 97.9836% ( 2) 00:22:32.290 23.040 - 23.138: 98.0583% ( 2) 00:22:32.290 23.237 - 23.335: 98.0956% ( 1) 00:22:32.290 23.434 - 23.532: 98.1329% ( 1) 00:22:32.290 23.532 - 23.631: 98.2076% ( 2) 00:22:32.290 23.631 - 23.729: 98.2450% ( 1) 00:22:32.290 23.729 - 23.828: 98.2823% ( 1) 00:22:32.290 23.926 - 24.025: 98.3196% ( 1) 00:22:32.290 24.222 - 24.320: 98.3943% ( 2) 00:22:32.290 24.320 - 24.418: 98.4690% ( 2) 00:22:32.290 24.418 - 24.517: 98.5063% ( 1) 00:22:32.290 24.615 - 24.714: 98.5437% ( 1) 00:22:32.290 24.812 - 24.911: 98.5810% ( 1) 00:22:32.290 24.911 - 25.009: 98.6184% ( 1) 00:22:32.290 25.108 - 25.206: 98.6557% ( 1) 00:22:32.290 25.206 - 25.403: 98.6931% ( 1) 00:22:32.290 25.600 - 25.797: 98.8051% ( 3) 00:22:32.291 25.797 - 25.994: 98.8424% ( 1) 00:22:32.291 25.994 - 26.191: 98.8798% ( 1) 00:22:32.291 26.191 - 26.388: 98.9544% ( 2) 00:22:32.291 26.388 - 26.585: 98.9918% ( 1) 00:22:32.291 26.978 - 27.175: 99.0291% ( 1) 00:22:32.291 27.372 - 27.569: 99.0665% ( 1) 00:22:32.291 27.569 - 27.766: 99.1038% ( 1) 00:22:32.291 28.160 - 28.357: 99.1412% ( 1) 00:22:32.291 28.357 - 28.554: 99.1785% ( 1) 00:22:32.291 28.948 - 29.145: 99.2158% ( 1) 00:22:32.291 29.145 - 29.342: 99.2905% ( 2) 00:22:32.291 29.342 - 29.538: 99.3279% ( 1) 00:22:32.291 29.932 - 30.129: 99.3652% ( 1) 00:22:32.291 31.705 - 31.902: 99.4025% ( 1) 
00:22:32.291 32.492 - 32.689: 99.4399% ( 1) 00:22:32.291 33.083 - 33.280: 99.5146% ( 2) 00:22:32.291 33.477 - 33.674: 99.5892% ( 2) 00:22:32.291 34.068 - 34.265: 99.6639% ( 2) 00:22:32.291 36.037 - 36.234: 99.7013% ( 1) 00:22:32.291 36.825 - 37.022: 99.7386% ( 1) 00:22:32.291 38.203 - 38.400: 99.7760% ( 1) 00:22:32.291 38.991 - 39.188: 99.8133% ( 1) 00:22:32.291 39.582 - 39.778: 99.8506% ( 1) 00:22:32.291 43.126 - 43.323: 99.8880% ( 1) 00:22:32.291 45.686 - 45.883: 99.9253% ( 1) 00:22:32.291 111.852 - 112.640: 99.9627% ( 1) 00:22:32.291 137.846 - 138.634: 100.0000% ( 1) 00:22:32.291 00:22:32.291 ************************************ 00:22:32.291 END TEST nvme_overhead 00:22:32.291 ************************************ 00:22:32.291 00:22:32.291 real 0m1.277s 00:22:32.291 user 0m1.079s 00:22:32.291 sys 0m0.129s 00:22:32.291 23:04:10 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.291 23:04:10 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:22:32.291 23:04:10 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:22:32.291 23:04:10 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:22:32.291 23:04:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.291 23:04:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:32.291 ************************************ 00:22:32.291 START TEST nvme_arbitration 00:22:32.291 ************************************ 00:22:32.291 23:04:10 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:22:35.626 Initializing NVMe Controllers 00:22:35.626 Attached to 0000:00:11.0 00:22:35.626 Attached to 0000:00:13.0 00:22:35.626 Attached to 0000:00:10.0 00:22:35.626 Attached to 0000:00:12.0 00:22:35.626 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:22:35.626 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:22:35.626 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:22:35.626 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:22:35.626 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:22:35.626 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:22:35.626 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:22:35.626 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:22:35.626 Initialization complete. Launching workers. 
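The arbitration results that follow report throughput two ways, and the columns are reciprocal views of the same measurement: secs/100000 ios = 100000 / (IO/s), e.g. 100000 / 725.33 ≈ 137.87. The 100000 presumably comes from the -n 100000 in the configuration line above. All six associations landing on an identical 725.33 IO/s suggests the queues are being serviced evenly, which is what an arbitration test would hope to see.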
00:22:35.626 Starting thread on core 1 with urgent priority queue 00:22:35.626 Starting thread on core 2 with urgent priority queue 00:22:35.626 Starting thread on core 3 with urgent priority queue 00:22:35.626 Starting thread on core 0 with urgent priority queue 00:22:35.626 QEMU NVMe Ctrl (12341 ) core 0: 725.33 IO/s 137.87 secs/100000 ios 00:22:35.626 QEMU NVMe Ctrl (12342 ) core 0: 725.33 IO/s 137.87 secs/100000 ios 00:22:35.626 QEMU NVMe Ctrl (12343 ) core 1: 725.33 IO/s 137.87 secs/100000 ios 00:22:35.626 QEMU NVMe Ctrl (12342 ) core 1: 725.33 IO/s 137.87 secs/100000 ios 00:22:35.626 QEMU NVMe Ctrl (12340 ) core 2: 725.33 IO/s 137.87 secs/100000 ios 00:22:35.626 QEMU NVMe Ctrl (12342 ) core 3: 725.33 IO/s 137.87 secs/100000 ios 00:22:35.626 ======================================================== 00:22:35.626 00:22:35.626 ************************************ 00:22:35.626 END TEST nvme_arbitration 00:22:35.626 ************************************ 00:22:35.626 00:22:35.626 real 0m3.347s 00:22:35.626 user 0m9.207s 00:22:35.626 sys 0m0.153s 00:22:35.626 23:04:13 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.626 23:04:13 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:22:35.626 23:04:13 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:22:35.626 23:04:13 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:35.626 23:04:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.626 23:04:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:35.626 ************************************ 00:22:35.626 START TEST nvme_single_aen 00:22:35.626 ************************************ 00:22:35.626 23:04:13 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:22:35.886 Asynchronous Event Request test 00:22:35.886 Attached to 0000:00:11.0 00:22:35.886 Attached to 0000:00:13.0 00:22:35.886 Attached to 0000:00:10.0 00:22:35.886 Attached to 0000:00:12.0 00:22:35.886 Reset controller to setup AER completions for this process 00:22:35.886 Registering asynchronous event callbacks... 
00:22:35.886 Getting orig temperature thresholds of all controllers 00:22:35.886 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:35.886 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:35.886 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:35.886 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:35.886 Setting all controllers temperature threshold low to trigger AER 00:22:35.886 Waiting for all controllers temperature threshold to be set lower 00:22:35.886 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:35.886 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:22:35.886 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:35.886 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:22:35.886 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:35.886 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:22:35.886 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:35.886 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:22:35.886 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:35.886 Waiting for all controllers to trigger AER and reset threshold 00:22:35.886 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:35.886 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:35.886 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:35.886 Cleaning up... 00:22:35.886 ************************************ 00:22:35.886 END TEST nvme_single_aen 00:22:35.886 ************************************ 00:22:35.886 00:22:35.886 real 0m0.308s 00:22:35.886 user 0m0.106s 00:22:35.886 sys 0m0.142s 00:22:35.886 23:04:14 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.886 23:04:14 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:22:35.886 23:04:14 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:22:35.886 23:04:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:35.886 23:04:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.887 23:04:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:35.887 ************************************ 00:22:35.887 START TEST nvme_doorbell_aers 00:22:35.887 ************************************ 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
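The xtrace above shows how nvme_doorbell_aers discovers its targets: gen_nvme.sh emits an SPDK JSON config and jq pulls out each controller's PCI address (traddr). A standalone equivalent of that discovery step, assuming the same repo layout:

  rootdir=/home/vagrant/spdk_repo/spdk
  # enumerate the PCI addresses of all NVMe controllers SPDK can see
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"    # here: 0000:00:10.0 through 0000:00:13.0

With four controllers and a 10-second 'timeout --preserve-status 10' wrapped around each doorbell_aers run, the loop should take a little over 40 seconds; the real 0m40.227s reported at the end of the test matches.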
00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:35.887 23:04:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:22:36.149 23:04:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:36.149 [2024-12-09 23:04:14.592288] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:22:46.199 Executing: test_write_invalid_db 00:22:46.199 Waiting for AER completion... 00:22:46.199 Failure: test_write_invalid_db 00:22:46.199 00:22:46.199 Executing: test_invalid_db_write_overflow_sq 00:22:46.199 Waiting for AER completion... 00:22:46.199 Failure: test_invalid_db_write_overflow_sq 00:22:46.199 00:22:46.199 Executing: test_invalid_db_write_overflow_cq 00:22:46.199 Waiting for AER completion... 00:22:46.199 Failure: test_invalid_db_write_overflow_cq 00:22:46.199 00:22:46.199 23:04:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:22:46.199 23:04:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:22:46.199 [2024-12-09 23:04:24.625373] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:22:56.259 Executing: test_write_invalid_db 00:22:56.259 Waiting for AER completion... 00:22:56.259 Failure: test_write_invalid_db 00:22:56.259 00:22:56.259 Executing: test_invalid_db_write_overflow_sq 00:22:56.259 Waiting for AER completion... 00:22:56.259 Failure: test_invalid_db_write_overflow_sq 00:22:56.259 00:22:56.259 Executing: test_invalid_db_write_overflow_cq 00:22:56.259 Waiting for AER completion... 00:22:56.259 Failure: test_invalid_db_write_overflow_cq 00:22:56.259 00:22:56.259 23:04:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:22:56.259 23:04:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:22:56.259 [2024-12-09 23:04:34.696952] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:06.284 Executing: test_write_invalid_db 00:23:06.284 Waiting for AER completion... 00:23:06.284 Failure: test_write_invalid_db 00:23:06.284 00:23:06.284 Executing: test_invalid_db_write_overflow_sq 00:23:06.284 Waiting for AER completion... 00:23:06.284 Failure: test_invalid_db_write_overflow_sq 00:23:06.284 00:23:06.284 Executing: test_invalid_db_write_overflow_cq 00:23:06.284 Waiting for AER completion... 
00:23:06.284 Failure: test_invalid_db_write_overflow_cq 00:23:06.284 00:23:06.284 23:04:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:23:06.284 23:04:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:23:06.284 [2024-12-09 23:04:44.669089] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.268 Executing: test_write_invalid_db 00:23:16.268 Waiting for AER completion... 00:23:16.268 Failure: test_write_invalid_db 00:23:16.268 00:23:16.268 Executing: test_invalid_db_write_overflow_sq 00:23:16.268 Waiting for AER completion... 00:23:16.268 Failure: test_invalid_db_write_overflow_sq 00:23:16.268 00:23:16.268 Executing: test_invalid_db_write_overflow_cq 00:23:16.268 Waiting for AER completion... 00:23:16.268 Failure: test_invalid_db_write_overflow_cq 00:23:16.268 00:23:16.268 ************************************ 00:23:16.268 END TEST nvme_doorbell_aers 00:23:16.268 ************************************ 00:23:16.268 00:23:16.268 real 0m40.227s 00:23:16.268 user 0m34.005s 00:23:16.268 sys 0m5.779s 00:23:16.268 23:04:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.268 23:04:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:23:16.268 23:04:54 nvme -- nvme/nvme.sh@97 -- # uname 00:23:16.268 23:04:54 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:23:16.268 23:04:54 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:23:16.268 23:04:54 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:23:16.268 23:04:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:16.268 23:04:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:16.268 ************************************ 00:23:16.268 START TEST nvme_multi_aen 00:23:16.268 ************************************ 00:23:16.268 23:04:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:23:16.268 [2024-12-09 23:04:54.728646] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.268 [2024-12-09 23:04:54.728720] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.268 [2024-12-09 23:04:54.728732] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.544 [2024-12-09 23:04:54.730456] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.544 [2024-12-09 23:04:54.730502] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.544 [2024-12-09 23:04:54.730511] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.544 [2024-12-09 23:04:54.731581] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. 
Dropping the request. 00:23:16.544 [2024-12-09 23:04:54.731610] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.544 [2024-12-09 23:04:54.731618] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.544 [2024-12-09 23:04:54.732698] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.544 [2024-12-09 23:04:54.732727] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.544 [2024-12-09 23:04:54.732735] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:23:16.544 Child process pid: 64081 00:23:16.544 [Child] Asynchronous Event Request test 00:23:16.544 [Child] Attached to 0000:00:11.0 00:23:16.544 [Child] Attached to 0000:00:13.0 00:23:16.544 [Child] Attached to 0000:00:10.0 00:23:16.544 [Child] Attached to 0000:00:12.0 00:23:16.544 [Child] Registering asynchronous event callbacks... 00:23:16.544 [Child] Getting orig temperature thresholds of all controllers 00:23:16.544 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:23:16.544 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:23:16.545 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:23:16.545 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:23:16.545 [Child] Waiting for all controllers to trigger AER and reset threshold 00:23:16.545 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:23:16.545 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:23:16.545 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:23:16.545 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:23:16.545 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:23:16.545 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:23:16.545 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:23:16.545 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:23:16.545 [Child] Cleaning up... 00:23:16.545 Asynchronous Event Request test 00:23:16.545 Attached to 0000:00:11.0 00:23:16.545 Attached to 0000:00:13.0 00:23:16.545 Attached to 0000:00:10.0 00:23:16.545 Attached to 0000:00:12.0 00:23:16.545 Reset controller to setup AER completions for this process 00:23:16.545 Registering asynchronous event callbacks... 
00:23:16.545 Getting orig temperature thresholds of all controllers 00:23:16.545 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:23:16.545 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:23:16.545 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:23:16.545 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:23:16.545 Setting all controllers temperature threshold low to trigger AER 00:23:16.545 Waiting for all controllers temperature threshold to be set lower 00:23:16.545 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:23:16.545 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:23:16.545 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:23:16.545 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:23:16.545 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:23:16.545 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:23:16.545 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:23:16.545 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:23:16.545 Waiting for all controllers to trigger AER and reset threshold 00:23:16.545 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:23:16.545 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:23:16.545 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:23:16.545 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:23:16.545 Cleaning up... 00:23:16.545 ************************************ 00:23:16.545 END TEST nvme_multi_aen 00:23:16.545 ************************************ 00:23:16.545 00:23:16.545 real 0m0.432s 00:23:16.545 user 0m0.138s 00:23:16.545 sys 0m0.173s 00:23:16.545 23:04:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.545 23:04:54 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:23:16.818 23:04:55 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:23:16.818 23:04:55 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:16.818 23:04:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:16.818 23:04:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:16.818 ************************************ 00:23:16.818 START TEST nvme_startup 00:23:16.818 ************************************ 00:23:16.818 23:04:55 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:23:16.818 Initializing NVMe Controllers 00:23:16.818 Attached to 0000:00:11.0 00:23:16.818 Attached to 0000:00:13.0 00:23:16.818 Attached to 0000:00:10.0 00:23:16.818 Attached to 0000:00:12.0 00:23:16.818 Initialization complete. 00:23:16.818 Time used:155350.234 (us). 
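Both AER tests above work the same way: each controller's temperature threshold, originally 343 Kelvin (70 Celsius), is dropped below the current temperature of 323 Kelvin (50 Celsius), which makes the controller raise an asynchronous event for log page 2 (the SMART / health log); the aer_cb handler then restores the threshold. The multi-aen variant additionally runs the sequence in a forked child (pid 64081 here) before repeating it in the parent, exercising AER delivery in both processes. nvme_startup, by contrast, only times controller bring-up; its Time used:155350.234 (us) is roughly 0.16 s to attach all four controllers.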
00:23:16.818 ************************************ 00:23:16.818 END TEST nvme_startup 00:23:16.818 ************************************ 00:23:16.818 00:23:16.818 real 0m0.218s 00:23:16.818 user 0m0.069s 00:23:16.818 sys 0m0.095s 00:23:16.818 23:04:55 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.818 23:04:55 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:23:16.818 23:04:55 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:23:16.818 23:04:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:16.818 23:04:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:16.818 23:04:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:16.818 ************************************ 00:23:16.818 START TEST nvme_multi_secondary 00:23:16.818 ************************************ 00:23:16.818 23:04:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:23:16.818 23:04:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64137 00:23:16.818 23:04:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:23:16.818 23:04:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64138 00:23:16.818 23:04:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:23:16.818 23:04:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:23:21.000 Initializing NVMe Controllers 00:23:21.000 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:23:21.000 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:23:21.000 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:21.000 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:23:21.000 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:23:21.000 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:23:21.000 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:23:21.000 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:23:21.000 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:23:21.000 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:23:21.000 Initialization complete. Launching workers. 
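Every spdk_nvme_perf instance in this test runs at queue depth 16 (-q 16), so by Little's law each namespace should sustain roughly IOPS ≈ 16 / average latency. The tables below bear that out: on the shared core 2, 16 / 5283.20 us ≈ 3028 IO/s against a reported 3028.08, and on core 1, 16 / 2218.01 us ≈ 7213 IO/s against 7211.61.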
00:23:21.000 ======================================================== 00:23:21.000 Latency(us) 00:23:21.000 Device Information : IOPS MiB/s Average min max 00:23:21.000 PCIE (0000:00:11.0) NSID 1 from core 2: 3028.08 11.83 5283.46 1474.81 12448.86 00:23:21.000 PCIE (0000:00:13.0) NSID 1 from core 2: 3028.08 11.83 5282.99 1432.40 13315.51 00:23:21.000 PCIE (0000:00:10.0) NSID 1 from core 2: 3028.08 11.83 5281.96 1474.52 12817.92 00:23:21.000 PCIE (0000:00:12.0) NSID 1 from core 2: 3028.08 11.83 5283.34 1391.51 13081.60 00:23:21.000 PCIE (0000:00:12.0) NSID 2 from core 2: 3028.08 11.83 5283.74 1472.05 12962.78 00:23:21.000 PCIE (0000:00:12.0) NSID 3 from core 2: 3028.08 11.83 5283.71 1334.36 12945.47 00:23:21.000 ======================================================== 00:23:21.000 Total : 18168.47 70.97 5283.20 1334.36 13315.51 00:23:21.000 00:23:21.000 Initializing NVMe Controllers 00:23:21.000 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:23:21.000 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:23:21.000 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:21.000 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:23:21.000 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:23:21.000 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:23:21.000 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:23:21.000 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:23:21.000 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:23:21.000 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:23:21.000 Initialization complete. Launching workers. 00:23:21.000 ======================================================== 00:23:21.000 Latency(us) 00:23:21.000 Device Information : IOPS MiB/s Average min max 00:23:21.000 PCIE (0000:00:11.0) NSID 1 from core 1: 7211.61 28.17 2218.19 1085.88 6855.07 00:23:21.000 PCIE (0000:00:13.0) NSID 1 from core 1: 7211.61 28.17 2218.21 1088.62 6696.49 00:23:21.000 PCIE (0000:00:10.0) NSID 1 from core 1: 7211.61 28.17 2217.19 1033.96 6629.50 00:23:21.000 PCIE (0000:00:12.0) NSID 1 from core 1: 7211.61 28.17 2218.21 1084.27 6836.91 00:23:21.000 PCIE (0000:00:12.0) NSID 2 from core 1: 7211.61 28.17 2218.14 1079.19 6539.87 00:23:21.000 PCIE (0000:00:12.0) NSID 3 from core 1: 7211.61 28.17 2218.09 1033.81 6485.28 00:23:21.000 ======================================================== 00:23:21.000 Total : 43269.68 169.02 2218.01 1033.81 6855.07 00:23:21.000 00:23:21.000 23:04:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64137 00:23:22.371 Initializing NVMe Controllers 00:23:22.371 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:23:22.371 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:23:22.371 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:22.371 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:23:22.371 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:23:22.371 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:23:22.371 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:22.371 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:23:22.371 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:23:22.371 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:23:22.371 Initialization complete. Launching workers. 
00:23:22.371 ======================================================== 00:23:22.371 Latency(us) 00:23:22.371 Device Information : IOPS MiB/s Average min max 00:23:22.371 PCIE (0000:00:11.0) NSID 1 from core 0: 9954.48 38.88 1606.94 709.47 8178.69 00:23:22.371 PCIE (0000:00:13.0) NSID 1 from core 0: 9954.48 38.88 1606.96 705.24 8904.05 00:23:22.371 PCIE (0000:00:10.0) NSID 1 from core 0: 9954.48 38.88 1606.06 684.95 7562.50 00:23:22.371 PCIE (0000:00:12.0) NSID 1 from core 0: 9954.48 38.88 1607.01 709.64 7433.95 00:23:22.371 PCIE (0000:00:12.0) NSID 2 from core 0: 9954.48 38.88 1607.02 699.55 8011.77 00:23:22.371 PCIE (0000:00:12.0) NSID 3 from core 0: 9954.48 38.88 1607.06 719.87 8188.89 00:23:22.371 ======================================================== 00:23:22.371 Total : 59726.90 233.31 1606.84 684.95 8904.05 00:23:22.371 00:23:22.371 23:05:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64138 00:23:22.371 23:05:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64208 00:23:22.371 23:05:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:23:22.371 23:05:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64209 00:23:22.371 23:05:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:23:22.371 23:05:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:23:25.650 Initializing NVMe Controllers 00:23:25.650 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:23:25.650 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:23:25.650 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:25.650 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:23:25.650 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:23:25.650 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:23:25.650 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:25.650 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:23:25.650 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:23:25.650 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:23:25.650 Initialization complete. Launching workers. 
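This second round (pids 64208 and 64209) reruns the same three-process experiment with the run lengths swapped: the instance on core mask 0x1 now runs for 3 seconds while the one on 0x4 runs for 5. Two back-to-back rounds, each bounded by a 5-second run, line up with the real 0m10.713s reported when the test finishes.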
00:23:25.650 ======================================================== 00:23:25.650 Latency(us) 00:23:25.650 Device Information : IOPS MiB/s Average min max 00:23:25.650 PCIE (0000:00:11.0) NSID 1 from core 0: 7547.37 29.48 2119.43 952.37 5776.91 00:23:25.650 PCIE (0000:00:13.0) NSID 1 from core 0: 7547.37 29.48 2119.43 988.05 5753.06 00:23:25.650 PCIE (0000:00:10.0) NSID 1 from core 0: 7547.37 29.48 2118.34 858.93 6372.86 00:23:25.650 PCIE (0000:00:12.0) NSID 1 from core 0: 7547.37 29.48 2119.26 980.03 6315.22 00:23:25.650 PCIE (0000:00:12.0) NSID 2 from core 0: 7547.37 29.48 2119.12 966.90 6480.95 00:23:25.650 PCIE (0000:00:12.0) NSID 3 from core 0: 7547.37 29.48 2119.05 906.66 6386.54 00:23:25.650 ======================================================== 00:23:25.650 Total : 45284.23 176.89 2119.10 858.93 6480.95 00:23:25.650 00:23:25.650 Initializing NVMe Controllers 00:23:25.650 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:23:25.650 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:23:25.650 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:25.650 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:23:25.650 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:23:25.650 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:23:25.650 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:23:25.650 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:23:25.650 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:23:25.650 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:23:25.650 Initialization complete. Launching workers. 00:23:25.650 ======================================================== 00:23:25.650 Latency(us) 00:23:25.650 Device Information : IOPS MiB/s Average min max 00:23:25.650 PCIE (0000:00:11.0) NSID 1 from core 1: 7439.91 29.06 2150.09 816.75 5882.67 00:23:25.650 PCIE (0000:00:13.0) NSID 1 from core 1: 7439.91 29.06 2150.05 819.63 6824.85 00:23:25.650 PCIE (0000:00:10.0) NSID 1 from core 1: 7439.91 29.06 2148.95 770.87 6499.02 00:23:25.650 PCIE (0000:00:12.0) NSID 1 from core 1: 7439.91 29.06 2149.93 759.79 6755.42 00:23:25.650 PCIE (0000:00:12.0) NSID 2 from core 1: 7439.91 29.06 2149.86 648.35 6390.43 00:23:25.650 PCIE (0000:00:12.0) NSID 3 from core 1: 7439.91 29.06 2149.81 623.59 6295.21 00:23:25.650 ======================================================== 00:23:25.650 Total : 44639.43 174.37 2149.78 623.59 6824.85 00:23:25.650 00:23:27.548 Initializing NVMe Controllers 00:23:27.548 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:23:27.548 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:23:27.548 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:27.548 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:23:27.548 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:23:27.548 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:23:27.548 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:23:27.548 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:23:27.548 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:23:27.548 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:23:27.548 Initialization complete. Launching workers. 
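Note how the core 0 numbers shift between rounds even though its own parameters did not change: 1606.84 us average (9954 IO/s) in round one versus 2119.10 us (7547 IO/s) here. A plausible reading is that in round one its 5-second run outlived the 3-second competitors and finished with the drives to itself, while in this round it is contended for its entire 3 seconds. The core 2 tables show the mirror image, improving from 5283.20 us in round one to 3633.90 us below, once it becomes the long-running instance.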
00:23:27.548 ======================================================== 00:23:27.548 Latency(us) 00:23:27.548 Device Information : IOPS MiB/s Average min max 00:23:27.548 PCIE (0000:00:11.0) NSID 1 from core 2: 4399.17 17.18 3636.32 836.30 12935.35 00:23:27.548 PCIE (0000:00:13.0) NSID 1 from core 2: 4399.17 17.18 3635.19 834.85 12901.00 00:23:27.548 PCIE (0000:00:10.0) NSID 1 from core 2: 4399.17 17.18 3631.77 811.23 12293.03 00:23:27.548 PCIE (0000:00:12.0) NSID 1 from core 2: 4399.17 17.18 3633.44 798.67 12707.42 00:23:27.548 PCIE (0000:00:12.0) NSID 2 from core 2: 4399.17 17.18 3633.57 812.60 13208.49 00:23:27.548 PCIE (0000:00:12.0) NSID 3 from core 2: 4399.17 17.18 3633.14 818.39 12629.48 00:23:27.548 ======================================================== 00:23:27.548 Total : 26395.02 103.11 3633.90 798.67 13208.49 00:23:27.548 00:23:27.548 ************************************ 00:23:27.548 END TEST nvme_multi_secondary 00:23:27.548 ************************************ 00:23:27.548 23:05:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64208 00:23:27.548 23:05:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64209 00:23:27.548 00:23:27.548 real 0m10.713s 00:23:27.548 user 0m18.428s 00:23:27.548 sys 0m0.643s 00:23:27.548 23:05:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.548 23:05:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:23:27.807 23:05:06 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:23:27.807 23:05:06 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:23:27.807 23:05:06 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63152 ]] 00:23:27.807 23:05:06 nvme -- common/autotest_common.sh@1094 -- # kill 63152 00:23:27.807 23:05:06 nvme -- common/autotest_common.sh@1095 -- # wait 63152 00:23:27.807 [2024-12-09 23:05:06.017624] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.017717] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.017756] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.017779] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.020811] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.020879] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.020901] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.020927] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.022704] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 
00:23:27.807 [2024-12-09 23:05:06.022740] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.022749] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.022760] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.024195] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.024247] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.024259] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 [2024-12-09 23:05:06.024271] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64080) is not found. Dropping the request. 00:23:27.807 23:05:06 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:23:27.807 23:05:06 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:23:27.807 23:05:06 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:23:27.807 23:05:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:27.807 23:05:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.807 23:05:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:27.807 ************************************ 00:23:27.807 START TEST bdev_nvme_reset_stuck_adm_cmd 00:23:27.807 ************************************ 00:23:27.808 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:23:27.808 * Looking for test storage... 
00:23:27.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:27.808 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:27.808 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:27.808 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:28.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.064 --rc genhtml_branch_coverage=1 00:23:28.064 --rc genhtml_function_coverage=1 00:23:28.064 --rc genhtml_legend=1 00:23:28.064 --rc geninfo_all_blocks=1 00:23:28.064 --rc geninfo_unexecuted_blocks=1 00:23:28.064 00:23:28.064 ' 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:28.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.064 --rc genhtml_branch_coverage=1 00:23:28.064 --rc genhtml_function_coverage=1 00:23:28.064 --rc genhtml_legend=1 00:23:28.064 --rc geninfo_all_blocks=1 00:23:28.064 --rc geninfo_unexecuted_blocks=1 00:23:28.064 00:23:28.064 ' 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:28.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.064 --rc genhtml_branch_coverage=1 00:23:28.064 --rc genhtml_function_coverage=1 00:23:28.064 --rc genhtml_legend=1 00:23:28.064 --rc geninfo_all_blocks=1 00:23:28.064 --rc geninfo_unexecuted_blocks=1 00:23:28.064 00:23:28.064 ' 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:28.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.064 --rc genhtml_branch_coverage=1 00:23:28.064 --rc genhtml_function_coverage=1 00:23:28.064 --rc genhtml_legend=1 00:23:28.064 --rc geninfo_all_blocks=1 00:23:28.064 --rc geninfo_unexecuted_blocks=1 00:23:28.064 00:23:28.064 ' 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:23:28.064 
23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64371 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64371 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64371 ']' 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
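With spdk_tgt launched and 0000:00:10.0 selected as the target bdf, the test drives the controller entirely over rpc.py. The sketch below assembles the RPC calls exactly as they appear in this trace into one standalone sequence (the base64 command payload is elided here; SCT=0 / SC=1 match the err_injection_sct and err_injection_sc values set above):

    # RPC sequence for the stuck-admin-command reset test, assembled from
    # the calls visible in this log; requires a running spdk_tgt.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the PCIe controller as bdev controller "nvme0".
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

    # Arm a one-shot injection for admin opcode 10 (Get Features): hold the
    # command for up to 15 s, then complete it with SCT=0 / SC=1, never
    # submitting it to the device.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # Send the Get Features admin command that will get stuck (payload elided).
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64-payload> &

    # Reset the controller while the command is pending; the reset path must
    # complete it manually with the injected status, which the script then
    # decodes from the saved completion and compares against SCT/SC.
    $rpc bdev_nvme_reset_controller nvme0
    $rpc bdev_nvme_detach_controller nvme0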
00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.064 23:05:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:28.064 [2024-12-09 23:05:06.416204] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:23:28.065 [2024-12-09 23:05:06.416340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64371 ] 00:23:28.321 [2024-12-09 23:05:06.588383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.321 [2024-12-09 23:05:06.696506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.321 [2024-12-09 23:05:06.696614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.321 [2024-12-09 23:05:06.696867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.321 [2024-12-09 23:05:06.697031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.887 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.887 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:23:28.887 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:23:28.887 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.887 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:29.144 nvme0n1 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_zBIXV.txt 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:29.144 true 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733785507 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64394 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:23:29.144 23:05:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:31.059 [2024-12-09 23:05:09.406734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:23:31.059 [2024-12-09 23:05:09.407342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:31.059 [2024-12-09 23:05:09.407388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:31.059 [2024-12-09 23:05:09.407403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.059 [2024-12-09 23:05:09.409005] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:23:31.059 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64394 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64394 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64394 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_zBIXV.txt 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_zBIXV.txt 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64371 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64371 ']' 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64371 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.059 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64371 00:23:31.339 killing process with pid 64371 00:23:31.339 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:31.339 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:31.339 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64371' 00:23:31.339 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64371 00:23:31.339 23:05:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64371 00:23:32.711 23:05:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:23:32.711 23:05:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:23:32.711 ************************************ 00:23:32.711 END TEST bdev_nvme_reset_stuck_adm_cmd 00:23:32.711 ************************************ 00:23:32.711 00:23:32.711 real 0m4.838s 
00:23:32.711 user 0m17.254s 00:23:32.711 sys 0m0.492s 00:23:32.711 23:05:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.711 23:05:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:32.711 23:05:11 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:23:32.711 23:05:11 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:23:32.711 23:05:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:32.711 23:05:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.711 23:05:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:32.711 ************************************ 00:23:32.711 START TEST nvme_fio 00:23:32.711 ************************************ 00:23:32.711 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:23:32.711 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:32.711 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:23:32.711 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:23:32.711 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:23:32.711 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:23:32.711 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:32.711 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:23:32.711 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:32.711 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:23:32.711 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:23:32.711 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:23:32.711 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:23:32.711 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:23:32.711 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:32.711 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:23:32.974 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:23:32.975 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:33.234 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:23:33.234 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:33.234 23:05:11 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:33.234 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:23:33.491 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:33.491 fio-3.35 00:23:33.491 Starting 1 thread 00:23:40.104 00:23:40.104 test: (groupid=0, jobs=1): err= 0: pid=64529: Mon Dec 9 23:05:18 2024 00:23:40.104 read: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec) 00:23:40.104 slat (nsec): min=3356, max=71564, avg=5047.82, stdev=2349.57 00:23:40.104 clat (usec): min=256, max=7859, avg=2823.56, stdev=789.73 00:23:40.104 lat (usec): min=261, max=7870, avg=2828.60, stdev=791.09 00:23:40.104 clat percentiles (usec): 00:23:40.104 | 1.00th=[ 1549], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2442], 00:23:40.104 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:23:40.104 | 70.00th=[ 2769], 80.00th=[ 2900], 90.00th=[ 3720], 95.00th=[ 4621], 00:23:40.104 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 7635], 00:23:40.104 | 99.99th=[ 7832] 00:23:40.105 bw ( KiB/s): min=88407, max=96464, per=100.00%, avg=92146.33, stdev=4059.52, samples=3 00:23:40.105 iops : min=22101, max=24116, avg=23036.33, stdev=1015.22, samples=3 00:23:40.105 write: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec); 0 zone resets 00:23:40.105 slat (nsec): min=3474, max=89551, avg=5318.03, stdev=2252.40 00:23:40.105 clat (usec): min=236, max=8128, avg=2830.46, stdev=791.12 00:23:40.105 lat (usec): min=243, max=8134, avg=2835.77, stdev=792.37 00:23:40.105 clat percentiles (usec): 00:23:40.105 | 1.00th=[ 1598], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2442], 00:23:40.105 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:23:40.105 | 70.00th=[ 2769], 80.00th=[ 2900], 90.00th=[ 3687], 95.00th=[ 4555], 00:23:40.105 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 7701], 00:23:40.105 | 99.99th=[ 7963] 00:23:40.105 bw ( KiB/s): min=87824, max=97744, per=100.00%, avg=92261.33, stdev=5041.94, samples=3 00:23:40.105 iops : min=21956, max=24436, avg=23065.33, stdev=1260.48, samples=3 00:23:40.105 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.09% 00:23:40.105 lat (msec) : 2=2.76%, 4=89.63%, 10=7.49% 00:23:40.105 cpu : usr=99.20%, sys=0.00%, ctx=2, majf=0, 
minf=607 00:23:40.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:40.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:40.105 issued rwts: total=45229,44984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:40.105 00:23:40.105 Run status group 0 (all jobs): 00:23:40.105 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec 00:23:40.105 WRITE: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:23:40.105 ----------------------------------------------------- 00:23:40.105 Suppressions used: 00:23:40.105 count bytes template 00:23:40.105 1 32 /usr/src/fio/parse.c 00:23:40.105 1 8 libtcmalloc_minimal.so 00:23:40.105 ----------------------------------------------------- 00:23:40.105 00:23:40.105 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:23:40.105 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:23:40.105 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:23:40.105 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:23:40.105 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:23:40.105 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:23:40.362 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:23:40.362 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:40.362 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:23:40.362 23:05:18 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:40.363 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:23:40.621 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:40.621 fio-3.35 00:23:40.621 Starting 1 thread 00:23:47.173 00:23:47.173 test: (groupid=0, jobs=1): err= 0: pid=64584: Mon Dec 9 23:05:24 2024 00:23:47.173 read: IOPS=21.5k, BW=84.2MiB/s (88.3MB/s)(168MiB/2001msec) 00:23:47.173 slat (nsec): min=3402, max=93101, avg=5361.07, stdev=2724.94 00:23:47.173 clat (usec): min=221, max=7521, avg=2968.18, stdev=847.86 00:23:47.173 lat (usec): min=225, max=7563, avg=2973.54, stdev=849.65 00:23:47.173 clat percentiles (usec): 00:23:47.174 | 1.00th=[ 1991], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2573], 00:23:47.174 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:23:47.174 | 70.00th=[ 2802], 80.00th=[ 2966], 90.00th=[ 4146], 95.00th=[ 5145], 00:23:47.174 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 6915], 99.95th=[ 6980], 00:23:47.174 | 99.99th=[ 7439] 00:23:47.174 bw ( KiB/s): min=80640, max=89672, per=98.76%, avg=85121.67, stdev=4516.39, samples=3 00:23:47.174 iops : min=20160, max=22418, avg=21280.33, stdev=1129.10, samples=3 00:23:47.174 write: IOPS=21.4k, BW=83.6MiB/s (87.6MB/s)(167MiB/2001msec); 0 zone resets 00:23:47.174 slat (usec): min=3, max=132, avg= 5.70, stdev= 2.84 00:23:47.174 clat (usec): min=229, max=7451, avg=2971.10, stdev=856.61 00:23:47.174 lat (usec): min=234, max=7466, avg=2976.80, stdev=858.45 00:23:47.174 clat percentiles (usec): 00:23:47.174 | 1.00th=[ 1958], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2573], 00:23:47.174 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:23:47.174 | 70.00th=[ 2802], 80.00th=[ 2966], 90.00th=[ 4178], 95.00th=[ 5145], 00:23:47.174 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 6849], 99.95th=[ 6915], 00:23:47.174 | 99.99th=[ 7308] 00:23:47.174 bw ( KiB/s): min=81024, max=90072, per=99.71%, avg=85308.67, stdev=4542.95, samples=3 00:23:47.174 iops : min=20256, max=22518, avg=21327.00, stdev=1135.76, samples=3 00:23:47.174 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:23:47.174 lat (msec) : 2=1.07%, 4=88.02%, 10=10.87% 00:23:47.174 cpu : usr=99.25%, sys=0.00%, ctx=5, majf=0, minf=608 00:23:47.174 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:47.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.174 issued rwts: total=43115,42800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.174 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.174 00:23:47.174 Run status group 0 (all jobs): 00:23:47.174 READ: bw=84.2MiB/s (88.3MB/s), 84.2MiB/s-84.2MiB/s (88.3MB/s-88.3MB/s), io=168MiB (177MB), run=2001-2001msec 00:23:47.174 WRITE: bw=83.6MiB/s (87.6MB/s), 83.6MiB/s-83.6MiB/s (87.6MB/s-87.6MB/s), io=167MiB (175MB), run=2001-2001msec 00:23:47.174 ----------------------------------------------------- 00:23:47.174 Suppressions used: 00:23:47.174 count bytes template 00:23:47.174 1 32 /usr/src/fio/parse.c 00:23:47.174 1 8 libtcmalloc_minimal.so 00:23:47.174 ----------------------------------------------------- 00:23:47.174 00:23:47.174 23:05:24 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:23:47.174 23:05:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:23:47.174 23:05:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:23:47.174 23:05:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:23:47.174 23:05:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:23:47.174 23:05:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:23:47.174 23:05:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:23:47.174 23:05:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:47.174 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:23:47.174 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:47.174 fio-3.35 00:23:47.174 Starting 1 thread 00:23:53.737 00:23:53.737 test: (groupid=0, jobs=1): err= 0: pid=64644: Mon Dec 9 23:05:31 2024 00:23:53.737 read: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec) 00:23:53.737 slat (usec): min=3, max=156, avg= 5.11, stdev= 2.57 00:23:53.737 clat (usec): min=249, max=11026, avg=2843.89, stdev=771.19 00:23:53.737 lat (usec): min=254, max=11054, avg=2849.00, stdev=772.48 00:23:53.737 clat percentiles (usec): 00:23:53.737 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2507], 00:23:53.737 | 30.00th=[ 2540], 40.00th=[ 2573], 
50.00th=[ 2606], 60.00th=[ 2638], 00:23:53.737 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 3490], 95.00th=[ 4621], 00:23:53.737 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7635], 99.95th=[ 9241], 00:23:53.737 | 99.99th=[10814] 00:23:53.737 bw ( KiB/s): min=82432, max=92824, per=98.22%, avg=88312.00, stdev=5329.35, samples=3 00:23:53.737 iops : min=20608, max=23206, avg=22078.00, stdev=1332.34, samples=3 00:23:53.737 write: IOPS=22.3k, BW=87.3MiB/s (91.5MB/s)(175MiB/2001msec); 0 zone resets 00:23:53.737 slat (usec): min=3, max=151, avg= 5.42, stdev= 2.35 00:23:53.737 clat (usec): min=212, max=10862, avg=2848.26, stdev=777.71 00:23:53.737 lat (usec): min=216, max=10872, avg=2853.67, stdev=778.95 00:23:53.737 clat percentiles (usec): 00:23:53.737 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2507], 00:23:53.737 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638], 00:23:53.737 | 70.00th=[ 2704], 80.00th=[ 2835], 90.00th=[ 3523], 95.00th=[ 4621], 00:23:53.737 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7832], 99.95th=[ 9503], 00:23:53.737 | 99.99th=[10683] 00:23:53.737 bw ( KiB/s): min=82320, max=93536, per=98.98%, avg=88461.33, stdev=5683.57, samples=3 00:23:53.737 iops : min=20580, max=23384, avg=22115.33, stdev=1420.89, samples=3 00:23:53.737 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:23:53.737 lat (msec) : 2=0.43%, 4=91.98%, 10=7.52%, 20=0.03% 00:23:53.737 cpu : usr=98.40%, sys=0.35%, ctx=17, majf=0, minf=607 00:23:53.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:53.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:53.737 issued rwts: total=44979,44708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:53.737 00:23:53.737 Run status group 0 (all jobs): 00:23:53.737 READ: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:23:53.737 WRITE: bw=87.3MiB/s (91.5MB/s), 87.3MiB/s-87.3MiB/s (91.5MB/s-91.5MB/s), io=175MiB (183MB), run=2001-2001msec 00:23:53.737 ----------------------------------------------------- 00:23:53.737 Suppressions used: 00:23:53.737 count bytes template 00:23:53.737 1 32 /usr/src/fio/parse.c 00:23:53.737 1 8 libtcmalloc_minimal.so 00:23:53.737 ----------------------------------------------------- 00:23:53.737 00:23:53.737 23:05:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:23:53.737 23:05:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:23:53.737 23:05:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:23:53.737 23:05:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:23:53.994 23:05:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:23:53.994 23:05:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:23:54.261 23:05:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:23:54.261 23:05:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:54.261 23:05:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:23:54.520 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:54.520 fio-3.35 00:23:54.520 Starting 1 thread 00:24:06.773 00:24:06.773 test: (groupid=0, jobs=1): err= 0: pid=64706: Mon Dec 9 23:05:43 2024 00:24:06.773 read: IOPS=23.3k, BW=91.2MiB/s (95.6MB/s)(183MiB/2001msec) 00:24:06.773 slat (nsec): min=4227, max=46946, avg=5010.99, stdev=1874.34 00:24:06.773 clat (usec): min=254, max=8843, avg=2735.68, stdev=672.73 00:24:06.773 lat (usec): min=258, max=8890, avg=2740.69, stdev=673.87 00:24:06.773 clat percentiles (usec): 00:24:06.773 | 1.00th=[ 1926], 5.00th=[ 2376], 10.00th=[ 2409], 20.00th=[ 2442], 00:24:06.773 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:24:06.773 | 70.00th=[ 2606], 80.00th=[ 2737], 90.00th=[ 3359], 95.00th=[ 4080], 00:24:06.773 | 99.00th=[ 5866], 99.50th=[ 6325], 99.90th=[ 7504], 99.95th=[ 7701], 00:24:06.773 | 99.99th=[ 8455] 00:24:06.773 bw ( KiB/s): min=91280, max=96688, per=100.00%, avg=93720.00, stdev=2742.39, samples=3 00:24:06.773 iops : min=22820, max=24172, avg=23430.00, stdev=685.60, samples=3 00:24:06.773 write: IOPS=23.2k, BW=90.6MiB/s (95.0MB/s)(181MiB/2001msec); 0 zone resets 00:24:06.773 slat (nsec): min=4302, max=69524, avg=5308.91, stdev=2024.46 00:24:06.773 clat (usec): min=206, max=8733, avg=2740.23, stdev=689.12 00:24:06.773 lat (usec): min=211, max=8752, avg=2745.54, stdev=690.32 00:24:06.773 clat percentiles (usec): 00:24:06.773 | 1.00th=[ 1860], 5.00th=[ 2376], 10.00th=[ 2409], 20.00th=[ 2442], 00:24:06.773 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:24:06.773 | 70.00th=[ 2606], 80.00th=[ 2737], 90.00th=[ 3359], 95.00th=[ 4080], 00:24:06.773 
| 99.00th=[ 5932], 99.50th=[ 6390], 99.90th=[ 7635], 99.95th=[ 7767], 00:24:06.773 | 99.99th=[ 8455] 00:24:06.773 bw ( KiB/s): min=92304, max=96088, per=100.00%, avg=93765.33, stdev=2033.74, samples=3 00:24:06.773 iops : min=23076, max=24022, avg=23441.33, stdev=508.43, samples=3 00:24:06.773 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:24:06.773 lat (msec) : 2=1.21%, 4=93.21%, 10=5.52% 00:24:06.773 cpu : usr=99.25%, sys=0.05%, ctx=2, majf=0, minf=605 00:24:06.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:06.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:06.773 issued rwts: total=46723,46422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:06.773 00:24:06.773 Run status group 0 (all jobs): 00:24:06.773 READ: bw=91.2MiB/s (95.6MB/s), 91.2MiB/s-91.2MiB/s (95.6MB/s-95.6MB/s), io=183MiB (191MB), run=2001-2001msec 00:24:06.773 WRITE: bw=90.6MiB/s (95.0MB/s), 90.6MiB/s-90.6MiB/s (95.0MB/s-95.0MB/s), io=181MiB (190MB), run=2001-2001msec 00:24:06.773 ----------------------------------------------------- 00:24:06.773 Suppressions used: 00:24:06.773 count bytes template 00:24:06.773 1 32 /usr/src/fio/parse.c 00:24:06.773 1 8 libtcmalloc_minimal.so 00:24:06.773 ----------------------------------------------------- 00:24:06.773 00:24:06.773 23:05:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:24:06.773 23:05:43 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:24:06.773 00:24:06.773 real 0m32.234s 00:24:06.773 user 0m22.432s 00:24:06.773 sys 0m16.951s 00:24:06.773 ************************************ 00:24:06.773 END TEST nvme_fio 00:24:06.773 ************************************ 00:24:06.773 23:05:43 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.773 23:05:43 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:24:06.773 00:24:06.773 real 1m43.651s 00:24:06.773 user 3m46.145s 00:24:06.773 sys 0m28.676s 00:24:06.773 23:05:43 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.773 23:05:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:24:06.773 ************************************ 00:24:06.773 END TEST nvme 00:24:06.773 ************************************ 00:24:06.773 23:05:43 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:24:06.773 23:05:43 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:24:06.773 23:05:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:06.773 23:05:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.773 23:05:43 -- common/autotest_common.sh@10 -- # set +x 00:24:06.773 ************************************ 00:24:06.773 START TEST nvme_scc 00:24:06.773 ************************************ 00:24:06.773 23:05:43 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:24:06.773 * Looking for test storage... 
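The four fio runs that nvme_fio just completed (one per controller at 0000:00:10.0 through 0000:00:13.0) all follow the same per-bdf loop. A condensed sketch of that loop as traced above, assuming simplified internals of test/nvme/nvme.sh (block-size selection is reduced to the bs=4096 every traced run chose):

    # Per-controller fio loop, condensed from the nvme_fio trace above.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    for bdf in "${bdfs[@]}"; do
        # Confirm the controller exposes a namespace; the trace then checks
        # for 'Extended Data LBA' before settling on bs=4096.
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" |
            grep -qE '^Namespace ID:[0-9]+' || continue
        bs=4096

        # Locate libasan via ldd and preload it ahead of the fio ioengine
        # plugin so ASAN interposition works inside fio.
        asan_lib=$(ldd "$rootdir/build/fio/spdk_nvme" | grep libasan | awk '{print $3}')
        LD_PRELOAD="$asan_lib $rootdir/build/fio/spdk_nvme" \
            /usr/src/fio/fio "$rootdir/app/fio/nvme/example_config.fio" \
            "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=$bs
    done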
00:24:06.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:24:06.773 23:05:43 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:06.773 23:05:43 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:24:06.773 23:05:43 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:06.773 23:05:43 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@345 -- # : 1 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@368 -- # return 0 00:24:06.773 23:05:43 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.773 23:05:43 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:06.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.773 --rc genhtml_branch_coverage=1 00:24:06.773 --rc genhtml_function_coverage=1 00:24:06.773 --rc genhtml_legend=1 00:24:06.773 --rc geninfo_all_blocks=1 00:24:06.773 --rc geninfo_unexecuted_blocks=1 00:24:06.773 00:24:06.773 ' 00:24:06.773 23:05:43 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:06.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.773 --rc genhtml_branch_coverage=1 00:24:06.773 --rc genhtml_function_coverage=1 00:24:06.773 --rc genhtml_legend=1 00:24:06.773 --rc geninfo_all_blocks=1 00:24:06.773 --rc geninfo_unexecuted_blocks=1 00:24:06.773 00:24:06.773 ' 00:24:06.773 23:05:43 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:06.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.773 --rc genhtml_branch_coverage=1 00:24:06.773 --rc genhtml_function_coverage=1 00:24:06.773 --rc genhtml_legend=1 00:24:06.773 --rc geninfo_all_blocks=1 00:24:06.773 --rc geninfo_unexecuted_blocks=1 00:24:06.773 00:24:06.773 ' 00:24:06.773 23:05:43 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:06.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.773 --rc genhtml_branch_coverage=1 00:24:06.773 --rc genhtml_function_coverage=1 00:24:06.773 --rc genhtml_legend=1 00:24:06.773 --rc geninfo_all_blocks=1 00:24:06.773 --rc geninfo_unexecuted_blocks=1 00:24:06.773 00:24:06.773 ' 00:24:06.773 23:05:43 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:24:06.773 23:05:43 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:24:06.773 23:05:43 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:24:06.773 23:05:43 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:06.773 23:05:43 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.773 23:05:43 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.773 23:05:43 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.773 23:05:43 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.773 23:05:43 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.773 23:05:43 nvme_scc -- paths/export.sh@5 -- # export PATH 00:24:06.774 23:05:43 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
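The cmp_versions trace above (scripts/common.sh, invoked as "lt 1.15 2") decides that the installed lcov predates version 2 by splitting both version strings on '.', '-' and ':' and comparing them field by field. A minimal re-implementation of that shape; the function name and the zero-fill for missing fields are simplifications:

    # lt_sketch A B -- succeed when version A sorts before version B,
    # mirroring the element-wise comparison traced from scripts/common.sh.
    lt_sketch() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A is newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A is older
        done
        return 1   # equal: not less-than
    }

    lt_sketch 1.15 2 && echo 'lcov < 2: keep legacy --rc options'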
00:24:06.774 23:05:43 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:24:06.774 23:05:43 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:24:06.774 23:05:43 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:24:06.774 23:05:43 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:24:06.774 23:05:43 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:24:06.774 23:05:43 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:24:06.774 23:05:43 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:24:06.774 23:05:43 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:24:06.774 23:05:43 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:24:06.774 23:05:43 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:06.774 23:05:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:24:06.774 23:05:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:24:06.774 23:05:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:24:06.774 23:05:43 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:06.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:06.774 Waiting for block devices as requested 00:24:06.774 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:06.774 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:06.774 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:24:06.774 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:24:10.963 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:24:10.963 23:05:49 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:24:10.963 23:05:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:24:10.963 23:05:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:10.963 23:05:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:10.963 23:05:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.963 23:05:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:24:10.963 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
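Note: each eval line in this stretch of the trace writes one field into a globally scoped array named after the controller (declared earlier with local -gA 'nvme0=()'). A sketch of that indirection, using a hypothetical helper name (set_ctrl_field is not a function in the script, only an illustration of the eval shape):

  set_ctrl_field() {
      local ref=$1 reg=$2 val=$3
      # same shape as the trace's eval 'nvme0[vid]="0x1b36"' lines;
      # routing the assignment through eval lets the target array be chosen by name,
      # and the quoting preserves multi-word values like 'QEMU NVMe Ctrl '
      eval "${ref}[${reg}]=\"\$val\""
  }
  declare -gA nvme0=()
  set_ctrl_field nvme0 mn 'QEMU NVMe Ctrl '
  echo "${nvme0[mn]}"    # -> QEMU NVMe Ctrl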
00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:24:10.964 23:05:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:24:10.964 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.965 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:24:10.966 23:05:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:24:10.966 
23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
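Note: the namespace fields captured above are enough to work out the geometry of ng0n1. flbas=0x4 selects LBA format index 4, whose lbaf4 entry further down reads "ms:0 lbads:12", i.e. 2^12 = 4096-byte blocks, and nsze=0x140000 gives the block count. A quick check of the arithmetic (illustrative only, not part of the test):

  nsze=$((0x140000))                # 1310720 logical blocks
  lbads=12                          # from lbaf4: "lbads:12"
  echo $(( nsze * (1 << lbads) ))   # 5368709120 bytes = exactly 5 GiB

The id-ctrl pass earlier also recorded oncs=0x15d; bit 8 of ONCS (0x100) advertises the Copy command, which is presumably the capability this nvme_scc (simple copy) run depends on.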
00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:24:10.966 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:24:10.967 23:05:49 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:24:10.967 23:05:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.967 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:24:10.968 23:05:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:24:10.968 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:24:10.969 23:05:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:24:10.969 23:05:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:10.969 23:05:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:10.969 23:05:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:24:10.969 23:05:49 
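The trace above completes the picture for the first controller: each namespace map lands in _ctrl_ns, then nvme0 itself is registered in the ctrls/nvmes/bdfs arrays (its PCI address is 0000:00:11.0) and in ordered_ctrls, before the discovery loop moves on to /sys/class/nvme/nvme1 at address 0000:00:10.0. The nvme_get parse that follows is the same loop traced throughout: run nvme-cli, split each "field : value" line on the colon, and assign the pair into a Bash associative array. A minimal standalone sketch of that loop, assuming nvme-cli is installed and /dev/nvme1 exists (the array name "ctrl" is illustrative, not the script's own):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # strip padding around the field name
        [[ -n $reg ]] || continue    # skip blank or malformed lines
        ctrl[$reg]=${val# }          # keep the value text verbatim
    done < <(nvme id-ctrl /dev/nvme1)
    echo "sn=${ctrl[sn]} mn=${ctrl[mn]} mdts=${ctrl[mdts]}"

functions.sh performs the assignment through eval with the array name passed by reference, which is why every field appears twice in the trace: once as the quoted eval string and once as the resulting assignment.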
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.969 
23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.969 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:24:10.970 
23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.970 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:10.971 23:05:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.971 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:24:10.972 23:05:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.972 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:24:10.973 23:05:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:10.973 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.974 
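At this point ng1n1, the generic character-device node for the namespace, has been fully parsed and stored via _ctrl_ns[${ns##*n}]=ng1n1, and the for-ns loop is re-running the same id-ns parse against the block node nvme1n1. The glob at functions.sh@54 is what picks up both spellings; a standalone sketch of just that pattern, with the sysfs path assumed:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so this matches
        # both ng1n1 (generic char node) and nvme1n1 (block node).
        echo "${ns##*/} -> namespace index ${ns##*n}"
    done

Because ${ns##*n} yields the same index ("1") for both node names, the later nvme1n1 entry lands on the same _ctrl_ns slot that ng1n1 just filled.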
00:24:10.974 23:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:24:10.974 23:05:49 nvme_scc -- nvme_get nvme1n1: captured id-ns fields:
00:24:10.974   nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f
00:24:10.975   dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:24:10.975   noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:24:10.975   anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:24:10.975   lbaf0='ms:0 lbads:9 rp:0 '    lbaf1='ms:8 lbads:9 rp:0 '   lbaf2='ms:16 lbads:9 rp:0 '
00:24:10.976   lbaf3='ms:64 lbads:9 rp:0 '   lbaf4='ms:0 lbads:12 rp:0 '  lbaf5='ms:8 lbads:12 rp:0 '
00:24:10.976   lbaf6='ms:16 lbads:12 rp:0 '  lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:24:10.976 23:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
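[editor's note] The flbas value captured above selects which lbafN entry is in use, and that entry's lbads is log2 of the data block size. An illustrative decoder (the helper name is ours, not functions.sh's) working off the array nvme_get just populated:

lba_in_use() {
    local -n ns=$1                       # e.g. nvme1n1, filled in by nvme_get
    local fmt=$(( ${ns[flbas]} & 0xf ))  # low nibble of flbas picks lbaf0..lbaf15
    local desc=${ns[lbaf$fmt]}           # e.g. "ms:64 lbads:12 rp:0 (in use)"
    local lbads=${desc##*lbads:}; lbads=${lbads%% *}
    echo "$1: lbaf$fmt, $((1 << lbads))-byte blocks"
}
# With the values above: flbas=0x7 -> lbaf7 -> lbads:12 -> 4096-byte data
# blocks with 64 bytes of metadata (ms:64), matching the '(in use)' marker.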
00:24:10.976 23:05:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:24:10.976 23:05:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:24:10.976 23:05:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:24:10.976 23:05:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:24:10.976 23:05:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:24:10.976 23:05:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:24:10.976 23:05:49 nvme_scc -- scripts/common.sh@27 -- # return 0
00:24:10.976 23:05:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
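[editor's note] pci_can_use returning 0 means 0000:00:12.0 passed the allow/block filter in scripts/common.sh: the trace shows a match test against an empty block list and an empty allow list falling through to return 0. A sketch of that shape (the PCI_ALLOWED/PCI_BLOCKED variable names are our assumption about the env vars involved):

pci_can_use() {
    local i bdf=$1
    for i in ${PCI_BLOCKED:-}; do          # explicitly blocked -> unusable
        [[ $i == "$bdf" ]] && return 1
    done
    [[ -z ${PCI_ALLOWED:-} ]] && return 0  # no allow list -> every device passes
    for i in ${PCI_ALLOWED:-}; do
        [[ $i == "$bdf" ]] && return 0
    done
    return 1
}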
00:24:10.976 23:05:49 nvme_scc -- nvme_get nvme2: captured id-ctrl fields:
00:24:10.976   vid=0x1b36 ssvid=0x1af4 sn='12342   ' mn='QEMU NVMe Ctrl   ' fr='8.0.0   ' rab=6
00:24:10.976   ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000
00:24:10.977   rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:24:10.977   nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0
00:24:10.977   apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0
00:24:10.978   dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0
00:24:10.978   endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:24:10.978   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
00:24:10.979   icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:24:10.979   subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:24:10.979   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:24:10.979   rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:24:10.979 23:05:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:24:10.979 23:05:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
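[editor's note] Everything this suite needs from the id-ctrl dump now sits in the nvme2 array. The field the scc (Simple Copy) test presumably gates on is oncs, where bit 8 advertises the Copy command per the NVMe base specification; the captured oncs=0x15d has that bit set. A hedged check (the function name is ours):

supports_scc() {
    local -n ctrl=$1            # e.g. nvme2, filled in by nvme_get
    (( ${ctrl[oncs]} & 0x100 )) # ONCS bit 8: Copy command supported
}
supports_scc nvme2 && echo "nvme2 supports Simple Copy"  # true for oncs=0x15d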
00:24:10.979 23:05:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:24:10.979 23:05:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:24:10.979 23:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:24:10.979 23:05:49 nvme_scc -- nvme_get ng2n1: captured id-ns fields:
00:24:10.979   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f
00:24:10.980   dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:24:10.980   noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:24:10.980   anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000
00:24:10.980 23:05:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:10.980 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:24:10.980 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:24:10.980 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.980 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.980 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:10.980 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:10.980 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:10.980 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.980 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.980 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:24:10.981 23:05:49 nvme_scc -- 
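Editor's note: the functions.sh@54-58 lines traced just above walk /sys/class/nvme/nvme2 and pick up both the character-device namespace nodes (ng2nY) and the block-device nodes (nvme2nY) with one extglob pattern. A minimal sketch of that enumeration, reconstructed from the trace; the ctrl path and the echo are illustrative stand-ins, while _ctrl_ns and the glob itself are the script's own:

  shopt -s extglob                            # the @(...) alternation needs extglob
  ctrl=/sys/class/nvme/nvme2                  # illustrative controller path
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue                  # an unmatched glob stays literal; skip it
    ns_dev=${ns##*/}                          # ng2n1, nvme2n1, ...
    # ${ns##*n} leaves only the namespace index, so the ng* (char) and
    # nvme*n* (block) nodes of the same namespace land in the same slot:
    echo "_ctrl_ns[${ns##*n}]=$ns_dev"
  done

Because both node flavors share a slot, the nvme2nY entries parsed later in this trace simply overwrite the ng2nY entries for the same namespace index.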
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 
23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:10.981 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:24:11.244 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:11.245 23:05:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:11.245 23:05:49 
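Editor's note: every nvme_get call traced in this section follows the same cadence: run nvme-cli, split each "field : value" output line on the first colon, and eval the pair into a global associative array named after the device node, which is exactly the IFS=: / read -r reg val / eval rhythm repeated above. A minimal stand-alone reconstruction, assuming plain "nvme id-ns" output; the _sketch name and the whitespace trimming are illustrative, not the script's exact code:

  nvme_get_sketch() {                         # hypothetical name; cf. functions.sh@17-23
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                       # global associative array, e.g. ng2n3=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                # field names are padded in nvme output
      val=${val# }
      [[ -n $val ]] || continue               # header lines carry no value (functions.sh@22)
      eval "${ref}[$reg]=\"$val\""            # e.g. ng2n3[nsze]="0x100000"
    done < <("$@")
  }
  # usage: nvme_get_sketch ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3

Going through eval lets a single helper populate an array for whatever name is passed as ref, which is why the trace repeats the identical read/eval pattern for ng2n1, ng2n2, ng2n3 and nvme2n1 in turn.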
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:24:11.245 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:24:11.246 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- 
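Editor's note on reading the captured fields: flbas=0x4 selects LBA format 4, and the matching lbaf4 entry, 'ms:0 lbads:12 rp:0 (in use)', means 2^12 = 4096-byte logical blocks with no per-block metadata. A short sketch of that lookup under the same values; the variable names are illustrative:

  declare -A ng2n3=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
  fmt=$(( ${ng2n3[flbas]} & 0xf ))            # low nibble of FLBAS = active format index
  lbaf=${ng2n3[lbaf$fmt]}                     # -> 'ms:0 lbads:12 rp:0 (in use)'
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
  echo "block size: $(( 1 << lbads )) bytes"  # -> 4096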
nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:11.247 23:05:49 nvme_scc -- 
00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh -- nvme_get nvme2n1 (id-ns, continued): dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh -- nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:24:11.247 23:05:49 nvme_scc -- nvme/functions.sh -- next namespace: /sys/class/nvme/nvme2/nvme2n2 exists, ns_dev=nvme2n2; nvme_get nvme2n2 id-ns /dev/nvme2n2 (/usr/local/src/nvme-cli/nvme)
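A minimal sketch (not the verbatim SPDK helper) of the nvme_get pattern the trace above shows: run the given command, split each "key : value" line on the first colon, and store the pair in a caller-named global associative array, e.g. nvme2n1[nsze]=0x100000. The trim helpers here are assumptions; the real functions.sh may normalize whitespace differently.

    nvme_get_sketch() {
        local ref=$1 reg val
        shift                                    # remaining args: the command to run
        local -gA "$ref=()"                      # declare the array at global scope
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # key, e.g. "nsze"
            val=${val#"${val%%[![:space:]]*}"}   # left-trim the value
            [[ -n $reg && -n $val ]] || continue # skip banner/blank lines
            eval "${ref}[\$reg]=\$val"           # e.g. nvme2n1[nsze]=0x100000
        done < <("$@")
    }
    # usage: nvme_get_sketch nvme2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
    #        echo "${nvme2n1[flbas]}"

Note that only the first colon splits the line, so values that themselves contain colons (the "ms:0 lbads:9 rp:0" LBA-format descriptors) land intact in val, which matches the assignments seen in the trace.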
00:24:11.248 23:05:49 nvme_scc -- nvme/functions.sh -- nvme_get nvme2n2 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:24:11.249 23:05:49 nvme_scc -- nvme/functions.sh -- nvme2n2 LBA formats: identical to nvme2n1 (lbaf0-lbaf7, lbaf4 'ms:0 lbads:12 rp:0' in use)
00:24:11.250 23:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
00:24:11.250 23:05:49 nvme_scc -- nvme/functions.sh -- next namespace: /sys/class/nvme/nvme2/nvme2n3 exists, ns_dev=nvme2n3; nvme_get nvme2n3 id-ns /dev/nvme2n3 (/usr/local/src/nvme-cli/nvme)
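A sketch of the sysfs namespace enumeration driving the "for ns in ..." loop visible in the trace: for a controller node such as /sys/class/nvme/nvme2, an extglob pattern matches both the block namespaces ("nvme2n1", "nvme2n2", ...) and the generic character devices ("ng2n1", ...), and each hit is indexed by its namespace id. The surrounding declarations are assumptions added to make the fragment self-contained.

    shopt -s extglob                     # required for the @(...|...) pattern
    declare -A _ctrl_ns
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue         # glob returns itself when nothing matches
        _ctrl_ns[${ns##*n}]=${ns##*/}    # e.g. _ctrl_ns[2]=nvme2n2
    done

Here ${ctrl##*nvme} expands to "2" (so "ng2*") and ${ctrl##*/} to "nvme2" (so "nvme2n*"), while ${ns##*n} strips everything through the last "n" to leave the numeric namespace id used as the array key.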
00:24:11.250 23:05:49 nvme_scc -- nvme/functions.sh -- nvme_get nvme2n3 (id-ns): identical to nvme2n1/nvme2n2 (nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0, nguid/eui64 all zero, lbaf0-lbaf7 with lbaf4 'ms:0 lbads:12 rp:0' in use)
00:24:11.251 23:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme2n3
00:24:11.251 23:05:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls[nvme2]=nvme2
00:24:11.251 23:05:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes[nvme2]=nvme2_ns
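A sketch of the controller bookkeeping just performed, together with the bdfs/ordered_ctrls entries recorded immediately below. The array names are taken from the trace; the declarations and the consuming loop are assumptions showing how later test code could iterate controllers in order and map each back to its PCI address.

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme2
    ctrls["$ctrl_dev"]=nvme2                 # name of the id-ctrl associative array
    nvmes["$ctrl_dev"]=nvme2_ns              # name of the per-namespace map
    bdfs["$ctrl_dev"]=0000:00:12.0           # PCI bus:device.function of the controller
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme2   # index 2 -> nvme2

    for name in "${ordered_ctrls[@]}"; do
        echo "$name @ ${bdfs[$name]}"        # -> "nvme2 @ 0000:00:12.0"
    done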
00:24:11.251 23:05:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs[nvme2]=0000:00:12.0
00:24:11.251 23:05:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[2]=nvme2
00:24:11.251 23:05:49 nvme_scc -- nvme/functions.sh -- next controller: /sys/class/nvme/nvme3 exists, pci=0000:00:13.0; scripts/common.sh pci_can_use 0000:00:13.0 -> return 0 (allowed); ctrl_dev=nvme3
00:24:11.251 23:05:49 nvme_scc -- nvme/functions.sh -- nvme_get nvme3 (id-ctrl via /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3): vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 (trace continues)
23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:24:11.253 
23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.253 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:24:11.254 23:05:49 nvme_scc -- 
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:24:11.254 23:05:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:24:11.254 23:05:49 nvme_scc -- nvme/functions.sh@198-199 -- # [xtrace condensed: ctrl_has_scc is evaluated for nvme1, nvme0, nvme3 and nvme2 in turn; each get_oncs lookup returns 0x15d, (( oncs & 1 << 8 )) succeeds, and the controller name is echoed]
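The selection above keys off ONCS (Optional NVM Command Support): bit 8 of that field advertises the Copy command, so `(( oncs & 1 << 8 ))` is the entire SCC test, with a bash nameref reaching into the per-controller array. A sketch of the check (simplified from the call chain in the trace):

    # Does controller $1 advertise Simple Copy (ONCS bit 8)?
    ctrl_has_scc() {
        local -n _ctrl=$1              # nameref into e.g. the nvme1 array
        local oncs=${_ctrl[oncs]:-0}   # 0x15d in this run
        (( oncs & 1 << 8 ))            # 0x15d has bit 8 (0x100) set
    }
    ctrl_has_scc nvme1 && echo nvme1   # all four controllers here pass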
00:24:11.255 23:05:49 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:24:11.255 23:05:49 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:24:11.255 23:05:49 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:24:11.255 23:05:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:24:11.255 23:05:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:24:11.255 23:05:49 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:24:11.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:12.076 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:24:12.076 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:24:12.076 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:24:12.076 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:24:12.076 23:05:50 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:24:12.076 23:05:50 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:24:12.076 23:05:50 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:12.076 23:05:50 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:24:12.334 ************************************
00:24:12.334 START TEST nvme_simple_copy
00:24:12.334 ************************************
00:24:12.334 23:05:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:24:12.592 Initializing NVMe Controllers
00:24:12.592 Attaching to 0000:00:10.0
00:24:12.592 Controller supports SCC. Attached to 0000:00:10.0
00:24:12.592 Namespace ID: 1 size: 6GB
00:24:12.592 Initialization complete.
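simple_copy then writes LBAs 0 through 63 with random data, issues a Simple Copy of that range to destination LBA 256, reads it back, and counts the matching blocks, which is exactly what the summary below reports. The same exercise can be replayed by hand with nvme-cli; a hedged sketch (the device node and the copy option names are assumptions from recent nvme-cli builds, so verify with `nvme copy --help` before relying on them):

    # Write a known 64-block pattern, copy it with Simple Copy, verify.
    dev=/dev/nvme0n1
    dd if=/dev/urandom of=pattern.bin bs=4096 count=64
    nvme write "$dev" --start-block=0   --block-count=63 --data-size=$((64*4096)) --data=pattern.bin
    nvme copy  "$dev" --sdlba=256 --slbs=0 --blocks=63    # one source range, LBAs 0..63
    nvme read  "$dev" --start-block=256 --block-count=63 --data-size=$((64*4096)) --data=readback.bin
    cmp pattern.bin readback.bin && echo "LBAs matching Written Data: 64"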
00:24:12.592
00:24:12.592 Controller QEMU NVMe Ctrl (12340 )
00:24:12.592 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:24:12.592 Namespace Block Size:4096
00:24:12.592 Writing LBAs 0 to 63 with Random Data
00:24:12.592 Copied LBAs from 0 - 63 to the Destination LBA 256
00:24:12.592 LBAs matching Written Data: 64
00:24:12.592
00:24:12.592 real 0m0.272s
00:24:12.592 user 0m0.100s
00:24:12.592 sys 0m0.069s
00:24:12.592 23:05:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:12.592 23:05:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:24:12.592 ************************************
00:24:12.592 END TEST nvme_simple_copy
00:24:12.592 ************************************
00:24:12.592
00:24:12.592 real 0m7.513s
00:24:12.592 user 0m1.080s
00:24:12.592 sys 0m1.356s
00:24:12.592 23:05:50 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:12.592 23:05:50 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:24:12.592 ************************************
00:24:12.592 END TEST nvme_scc
00:24:12.592 ************************************
00:24:12.592 23:05:50 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:24:12.592 23:05:50 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:24:12.592 23:05:50 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:24:12.592 23:05:50 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:24:12.592 23:05:50 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:24:12.592 23:05:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:24:12.592 23:05:50 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:12.592 23:05:50 -- common/autotest_common.sh@10 -- # set +x
00:24:12.592 ************************************
00:24:12.592 START TEST nvme_fdp
00:24:12.592 ************************************
00:24:12.592 23:05:50 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:24:12.592 * Looking for test storage...
00:24:12.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:24:12.592 23:05:50 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:12.592 23:05:50 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:12.592 23:05:50 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:24:12.592 23:05:51 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@333-341 -- # [xtrace condensed: ver1 and ver2 are split on IFS=.-: into (1 15) and (2), op='<', ver1_l=2, ver2_l=1]
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:12.592 23:05:51 nvme_fdp -- scripts/common.sh@368 -- # return 0
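The `lt 1.15 2` walk above is scripts/common.sh comparing dotted version strings field by field: split both sides on `.`, `-` and `:`, then compare numerically index by index until one side wins. A compact sketch of the same technique (simplified; the real cmp_versions also handles the other comparison operators):

    # Succeed iff version $1 sorts strictly before version $2.
    version_lt() {
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # e.g. 1 < 2
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "installed lcov predates 2.x"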
00:24:12.592 23:05:51 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:12.592 23:05:51 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:24:12.592 23:05:51 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='[same flag block as above]'
00:24:12.592 23:05:51 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov [same flag block as above]'
00:24:12.593 23:05:51 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov [same flag block as above]'
00:24:12.593 23:05:51 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:24:12.593 23:05:51 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:24:12.593 23:05:51 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:12.593 23:05:51 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:12.593 23:05:51 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
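functions.sh locates the repository root relative to its own path, which is what the dirname/readlink pair a few lines up is doing. The idiom generalizes to any sourced script; a minimal sketch (the three `..` hops match this tree layout and are otherwise arbitrary):

    # Resolve the repo root three directory levels above this script.
    rootdir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")/../../../")
    echo "$rootdir"   # /home/vagrant/spdk_repo/spdk in this run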
00:24:12.593 23:05:51 nvme_fdp -- paths/export.sh@2-4 -- # [xtrace condensed: PATH is rebuilt three times, each pass prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin again, so the final PATH carries four copies of each toolchain directory ahead of /usr/local/bin:...:/var/lib/snapd/snap/bin]
00:24:12.593 23:05:51 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:24:12.593 23:05:51 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain entries]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:24:12.593 23:05:51 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:24:12.593 23:05:51 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:12.593 23:05:51 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:24:12.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:13.106 Waiting for block devices as requested
00:24:13.106 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:24:13.106 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:24:13.362 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:24:13.362 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:24:18.727 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
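The PATH duplication noted above happens because paths/export.sh prepends the Go, protoc and golangci directories unconditionally every time it is sourced. A guarded prepend keeps PATH idempotent however often the file is re-sourced; a sketch, not the shipped export.sh:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                     # already there, leave PATH alone
            *) PATH=$1${PATH:+:$PATH} ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH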
00:24:18.727 23:05:56 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:24:18.727 23:05:56 nvme_fdp -- scripts/common.sh@18 -- # local i
00:24:18.727 23:05:56 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:24:18.727 23:05:56 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:24:18.727 23:05:56 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
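scan_nvme_ctrls, expanded above, walks /sys/class/nvme, resolves each controller's PCI address, filters it through pci_can_use, and only then shells out to nvme-cli. A sketch of that walk (standard sysfs layout; recovering the PCI address from the device symlink is an assumption, since the trace only shows the resulting value):

    # Enumerate NVMe controllers and the PCI functions behind them.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # glob may match nothing
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
        echo "${ctrl##*/} -> $pci"
    done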
00:24:18.727 23:05:56 nvme_fdp -- nvme/functions.sh@21-23 -- # [xtrace condensed: the same "IFS=: / read -r reg val / eval" triplet repeats for every id-ctrl field of nvme0]
00:24:18.728 23:05:56 nvme_fdp -- # nvme0 values captured: vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0
00:24:18.729 23:05:56 nvme_fdp --
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:24:18.729 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.729 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.729 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.729 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:24:18.729 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:24:18.729 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.729 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.729 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.729 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:24:18.730 23:05:56 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:24:18.730 23:05:56 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.730 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:24:18.731 23:05:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
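[editor's note] The trace above is the nvme_get helper from nvme/functions.sh walking the output of /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 field by field: each output line is split on the first ':' (IFS=:), entries with an empty value are skipped ([[ -n $val ]]), and everything else is eval'd into the ng0n1 associative array. A minimal sketch of that parsing pattern, assuming bash 4.3+ for namerefs; the helper name parse_id_output and the whitespace trimming below are illustrative, not the exact upstream code:

    parse_id_output() {
      # $1 = name of an associative array to fill; $2.. = nvme-cli command line
      local -n _out=$1; shift
      local reg val
      while IFS=: read -r reg val; do
        [[ -n $val ]] || continue         # skip the banner line and blank fields
        reg=${reg//[[:space:]]/}          # "nsze    " -> "nsze"
        _out[$reg]=${val# }               # drop the single space after ':'
      done < <("$@")
    }

    declare -A ng0n1=()
    parse_id_output ng0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
    echo "nsze=${ng0n1[nsze]} nlbaf=${ng0n1[nlbaf]} flbas=${ng0n1[flbas]}"

Because val is the last operand of read, values that themselves contain ':' (the lbaf descriptors above, e.g. 'ms:0 lbads:12 rp:0 (in use)') survive the split intact; only the first colon on each line acts as the separator.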
00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:24:18.731 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:24:18.732 23:05:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:24:18.732 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:18.733 23:05:56 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:24:18.733 23:05:56 nvme_fdp -- scripts/common.sh@18 -- # local i 00:24:18.733 23:05:56 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:18.733 23:05:56 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:18.733 23:05:56 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:24:18.733 23:05:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.733 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.734 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.735 23:05:56 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:24:18.735 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:24:18.736 23:05:56 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.736 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:24:18.737 23:05:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:18.737 23:05:56 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.737 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:18.738 23:05:56 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:24:18.738 23:05:56 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.738 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:24:18.738 23:05:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:24:18.739 23:05:56 nvme_fdp -- scripts/common.sh@18 -- # local i 00:24:18.739 23:05:56 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:24:18.739 23:05:56 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:18.739 23:05:56 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.739 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
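The id-ctrl records above are bash xtrace output from the nvme_get helper in nvme/functions.sh. A minimal sketch of that helper, reconstructed only from the line tags visible in this trace (@16 through @23); the whitespace trimming is an assumption, and the upstream SPDK source may differ in detail:

    # Sketch of nvme_get as reconstructed from the @16-@23 trace records.
    # Populates a caller-named global associative array (e.g. nvme2) from
    # the "field : value" lines emitted by nvme-cli.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # matches the @20 record: local -gA 'nvme2=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip lines with no value (the @22 checks)
            reg=${reg//[[:space:]]/}             # assumed trim: "vid   " -> "vid"
            val=${val#"${val%%[![:space:]]*}"}   # assumed trim of leading blanks only
            eval "${ref}[$reg]=\"\$val\""        # yields nvme2[vid]=0x1b36, ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # the @16 record: nvme id-ctrl /dev/nvme2
    }
    # Usage mirroring this trace: nvme_get nvme2 id-ctrl /dev/nvme2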
00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:24:18.740 23:05:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
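The wctemp=343 and cctemp=373 values captured just above are kelvin, per the NVMe spec's WCTEMP/CCTEMP fields. For illustration only (this is not part of the traced script), once the array is populated those fields can be decoded like so; the oacs bit position is bit 3 (Namespace Management) per the NVMe base spec:

    # Illustration: decode a few of the fields parsed above.
    echo "warning  temp threshold: $(( nvme2[wctemp] - 273 )) C"    # 343 K -> 70 C
    echo "critical temp threshold: $(( nvme2[cctemp] - 273 )) C"    # 373 K -> 100 C
    (( nvme2[oacs] & (1 << 3) )) && echo "namespace management supported"  # oacs=0x12a has bit 3 set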
00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.740 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:24:18.741 23:05:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:24:18.741 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
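After id-ctrl, the @53 through @58 records below show the same helper being reapplied per namespace: the script globs the controller's sysfs children (ng2n1, ng2n2, ...) and records each in a per-controller map. A sketch of that walk, reconstructed from the expansions visible in the trace (requires extglob; reuses the nvme_get sketch above; declare -n substitutes for the function-local nameref seen at @53):

    # Sketch of the per-namespace walk traced at @53-@58 below.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns                    # nameref onto nvme2_ns, as at @53
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng2*|nvme2n*
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                            # ng2n1, ng2n2, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # fills ng2n1=(), ng2n2=(), ...
        _ctrl_ns[${ns##*n}]=$ns_dev                 # keyed by namespace id, e.g. 1
    done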
00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 
23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.742 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.743 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:24:18.744 23:05:56 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 
23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:24:18.744 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:24:18.745 
23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.745 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
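(Editor's note: the repetitive trace above is `bash -x` output of the nvme_get helper in nvme/functions.sh. For each namespace node it shells out to /usr/local/src/nvme-cli/nvme id-ns, then walks the report line by line: IFS=: splits each line into reg/val, an emptiness check skips lines without a value, and an eval stores the pair in a global associative array, e.g. ng2n3[nsze]=0x100000. A condensed sketch of that loop, reconstructed from what the trace shows rather than copied from the script, so treat the details as an approximation:

    nvme_get() {                  # e.g. nvme_get ng2n3 id-ns /dev/ng2n3
        local ref=$1 reg val
        shift
        local -gA "$ref=()"       # fresh global array, as at functions.sh@20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # the [[ -n ... ]] gate at functions.sh@22
            # strip whitespace from the key and the leading space from the value,
            # so "lbaf  4 : ms:0 lbads:12 rp:0 (in use)" becomes
            # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Because read -r assigns everything after the first colon to val, the lbafN descriptors, which contain colons themselves (ms:, lbads:, rp:), survive as a single value; that is why the trace can test [[ -n ms:0 lbads:12 rp:0 (in use) ]] in one step. End of editor's note.)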
00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:24:18.746 23:05:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:18.746 23:05:56 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:18.746 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:24:18.747 23:05:56 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:18.747 
23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:24:18.747 23:05:56 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.747 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:18.748 
23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
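(Editor's note: the functions.sh@54-58 records in this trace show how the harness enumerates namespaces: one extglob pattern, "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, matches both the generic character-device nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, ...) under /sys/class/nvme/nvme2, and each parsed array is registered by namespace index via _ctrl_ns[${ns##*n}]. A hypothetical standalone rendering of that walk; the variable names inst and idx and the shopt/printf lines are illustrative additions, not from the script:

    shopt -s extglob nullglob                  # the @(...) pattern needs extglob
    for ctrl in /sys/class/nvme/nvme+([0-9]); do
        inst=${ctrl##*nvme}                    # "2" for /sys/class/nvme/nvme2
        for ns in "$ctrl/"@("ng${inst}"|"nvme${inst}n")*; do
            ns_dev=${ns##*/}                   # e.g. ng2n3 or nvme2n3
            idx=${ns##*n}                      # namespace index, e.g. 3
            printf '%s -> _ctrl_ns[%s]\n' "$ns_dev" "$idx"
        done
    done

Note that ngXnN and nvmeXnN map to the same index, and the glob expands in sorted order with the ng* nodes first, so by the end of the loop each _ctrl_ns slot holds the block-device name; the trace reflects that, with the ng2nN dumps followed by nvme2nN dumps that reuse the same indices. End of editor's note.)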
00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:24:18.748 23:05:56 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.748 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:24:18.749 23:05:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.749 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:24:18.750 23:05:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:24:18.750 23:05:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:24:18.750 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:18.751 23:05:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:24:18.751 23:05:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:24:18.751 23:05:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:24:18.751 23:05:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:18.751 23:05:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.751 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 
23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:24:18.752 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.752 23:05:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:24:18.753 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:24:18.754 23:05:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:24:18.754 23:05:57 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:24:18.754 23:05:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:24:18.754 23:05:57 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:24:18.754 23:05:57 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:19.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:19.577 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:19.577 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:19.577 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:24:19.577 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:24:19.836 23:05:58 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:24:19.836 23:05:58 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:19.836 23:05:58 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.836 23:05:58 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:24:19.836 ************************************ 00:24:19.836 START TEST nvme_flexible_data_placement 00:24:19.836 ************************************ 00:24:19.836 23:05:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:24:19.836 Initializing NVMe Controllers 00:24:19.836 Attaching to 0000:00:13.0 00:24:19.836 Controller supports FDP Attached to 0000:00:13.0 00:24:19.836 Namespace ID: 1 Endurance Group ID: 1 00:24:19.836 Initialization complete. 
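The controller selection traced above reduces to a single test: CTRATT bit 19 (FDP supported) from each controller's identify data. Only nvme3 reports 0x88010, which has bit 19 (0x80000) set; the 0x8000 controllers do not, so nvme3 at 0000:00:13.0 is chosen. A minimal standalone sketch of the same check, assuming nvme-cli and jq are installed (neither is used by the test itself, which reads its own cached identify dump):

    #!/usr/bin/env bash
    # List NVMe controllers whose CTRATT advertises FDP (bit 19).
    for dev in /dev/nvme[0-9]*; do
        [[ $dev =~ /dev/nvme[0-9]+$ ]] || continue    # skip namespaces such as /dev/nvme0n1
        ctratt=$(nvme id-ctrl "$dev" --output-format=json | jq -r '.ctratt')
        if (( ctratt & (1 << 19) )); then
            printf '%s supports FDP (ctratt=0x%x)\n' "$dev" "$ctratt"
        fi
    done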
00:24:19.836 00:24:19.836 ================================== 00:24:19.836 == FDP tests for Namespace: #01 == 00:24:19.836 ================================== 00:24:19.836 00:24:19.836 Get Feature: FDP: 00:24:19.836 ================= 00:24:19.836 Enabled: Yes 00:24:19.836 FDP configuration Index: 0 00:24:19.836 00:24:19.836 FDP configurations log page 00:24:19.836 =========================== 00:24:19.836 Number of FDP configurations: 1 00:24:19.836 Version: 0 00:24:19.836 Size: 112 00:24:19.836 FDP Configuration Descriptor: 0 00:24:19.836 Descriptor Size: 96 00:24:19.836 Reclaim Group Identifier format: 2 00:24:19.836 FDP Volatile Write Cache: Not Present 00:24:19.836 FDP Configuration: Valid 00:24:19.836 Vendor Specific Size: 0 00:24:19.836 Number of Reclaim Groups: 2 00:24:19.836 Number of Reclaim Unit Handles: 8 00:24:19.836 Max Placement Identifiers: 128 00:24:19.836 Number of Namespaces Supported: 256 00:24:19.836 Reclaim unit Nominal Size: 6000000 bytes 00:24:19.836 Estimated Reclaim Unit Time Limit: Not Reported 00:24:19.836 RUH Desc #000: RUH Type: Initially Isolated 00:24:19.836 RUH Desc #001: RUH Type: Initially Isolated 00:24:19.836 RUH Desc #002: RUH Type: Initially Isolated 00:24:19.836 RUH Desc #003: RUH Type: Initially Isolated 00:24:19.836 RUH Desc #004: RUH Type: Initially Isolated 00:24:19.836 RUH Desc #005: RUH Type: Initially Isolated 00:24:19.836 RUH Desc #006: RUH Type: Initially Isolated 00:24:19.836 RUH Desc #007: RUH Type: Initially Isolated 00:24:19.836 00:24:19.836 FDP reclaim unit handle usage log page 00:24:19.836 ====================================== 00:24:19.836 Number of Reclaim Unit Handles: 8 00:24:19.836 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:24:19.836 RUH Usage Desc #001: RUH Attributes: Unused 00:24:19.836 RUH Usage Desc #002: RUH Attributes: Unused 00:24:19.836 RUH Usage Desc #003: RUH Attributes: Unused 00:24:19.836 RUH Usage Desc #004: RUH Attributes: Unused 00:24:19.836 RUH Usage Desc #005: RUH Attributes: Unused 00:24:19.836 RUH Usage Desc #006: RUH Attributes: Unused 00:24:19.836 RUH Usage Desc #007: RUH Attributes: Unused 00:24:19.836 00:24:19.836 FDP statistics log page 00:24:19.836 ======================= 00:24:19.836 Host bytes with metadata written: 849350656 00:24:19.836 Media bytes with metadata written: 849518592 00:24:19.836 Media bytes erased: 0 00:24:19.836 00:24:19.836 FDP Reclaim unit handle status 00:24:19.836 ============================== 00:24:19.836 Number of RUHS descriptors: 2 00:24:19.836 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000035ff 00:24:19.836 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:24:19.836 00:24:19.836 FDP write on placement id: 0 success 00:24:19.836 00:24:19.836 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:24:19.836 00:24:19.836 IO mgmt send: RUH update for Placement ID: #0 Success 00:24:19.836 00:24:19.836 Get Feature: FDP Events for Placement handle: #0 00:24:19.836 ======================== 00:24:19.836 Number of FDP Events: 6 00:24:19.836 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:24:19.836 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:24:19.836 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:24:19.836 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:24:19.836 FDP Event: #4 Type: Media Reallocated Enabled: No 00:24:19.836 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:24:19.836 00:24:19.836 FDP events log page
00:24:19.836 =================== 00:24:19.836 Number of FDP events: 1 00:24:19.836 FDP Event #0: 00:24:19.836 Event Type: RU Not Written to Capacity 00:24:19.836 Placement Identifier: Valid 00:24:19.836 NSID: Valid 00:24:19.836 Location: Valid 00:24:19.836 Placement Identifier: 0 00:24:19.836 Event Timestamp: c 00:24:19.836 Namespace Identifier: 1 00:24:19.836 Reclaim Group Identifier: 0 00:24:19.836 Reclaim Unit Handle Identifier: 0 00:24:19.836 00:24:19.836 FDP test passed 00:24:20.094 00:24:20.094 real 0m0.245s 00:24:20.094 user 0m0.081s 00:24:20.094 sys 0m0.063s 00:24:20.094 23:05:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.094 23:05:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:24:20.094 ************************************ 00:24:20.094 END TEST nvme_flexible_data_placement 00:24:20.094 ************************************ 00:24:20.094 ************************************ 00:24:20.094 END TEST nvme_fdp 00:24:20.094 00:24:20.094 real 0m7.450s 00:24:20.094 user 0m1.030s 00:24:20.094 sys 0m1.380s 00:24:20.094 23:05:58 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.094 23:05:58 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:24:20.094 ************************************ 00:24:20.094 23:05:58 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:24:20.094 23:05:58 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:24:20.094 23:05:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:20.094 23:05:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.094 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:24:20.094 ************************************ 00:24:20.094 START TEST nvme_rpc 00:24:20.094 ************************************ 00:24:20.094 23:05:58 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:24:20.094 * Looking for test storage... 
00:24:20.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:24:20.094 23:05:58 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:20.094 23:05:58 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:20.094 23:05:58 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:24:20.094 23:05:58 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.094 23:05:58 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:24:20.094 23:05:58 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.094 23:05:58 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:20.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.094 --rc genhtml_branch_coverage=1 00:24:20.094 --rc genhtml_function_coverage=1 00:24:20.094 --rc genhtml_legend=1 00:24:20.094 --rc geninfo_all_blocks=1 00:24:20.094 --rc geninfo_unexecuted_blocks=1 00:24:20.094 00:24:20.094 ' 00:24:20.094 23:05:58 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:20.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.094 --rc genhtml_branch_coverage=1 00:24:20.094 --rc genhtml_function_coverage=1 00:24:20.095 --rc genhtml_legend=1 00:24:20.095 --rc geninfo_all_blocks=1 00:24:20.095 --rc geninfo_unexecuted_blocks=1 00:24:20.095 00:24:20.095 ' 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:20.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.095 --rc genhtml_branch_coverage=1 00:24:20.095 --rc genhtml_function_coverage=1 00:24:20.095 --rc genhtml_legend=1 00:24:20.095 --rc geninfo_all_blocks=1 00:24:20.095 --rc geninfo_unexecuted_blocks=1 00:24:20.095 00:24:20.095 ' 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:20.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.095 --rc genhtml_branch_coverage=1 00:24:20.095 --rc genhtml_function_coverage=1 00:24:20.095 --rc genhtml_legend=1 00:24:20.095 --rc geninfo_all_blocks=1 00:24:20.095 --rc geninfo_unexecuted_blocks=1 00:24:20.095 00:24:20.095 ' 00:24:20.095 23:05:58 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:20.095 23:05:58 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:20.095 23:05:58 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:20.353 23:05:58 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:24:20.353 23:05:58 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:24:20.353 23:05:58 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:24:20.353 23:05:58 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:24:20.353 23:05:58 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66081 00:24:20.353 23:05:58 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:24:20.353 23:05:58 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:24:20.353 23:05:58 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66081 00:24:20.353 23:05:58 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 66081 ']' 00:24:20.353 23:05:58 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.353 23:05:58 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.353 23:05:58 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.353 23:05:58 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.353 23:05:58 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.353 [2024-12-09 23:05:58.639457] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:24:20.353 [2024-12-09 23:05:58.639585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66081 ] 00:24:20.353 [2024-12-09 23:05:58.805248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:20.610 [2024-12-09 23:05:58.929991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.610 [2024-12-09 23:05:58.930264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.174 23:05:59 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.174 23:05:59 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:24:21.174 23:05:59 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:24:21.431 Nvme0n1 00:24:21.431 23:05:59 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:24:21.431 23:05:59 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:24:21.688 request: 00:24:21.688 { 00:24:21.688 "bdev_name": "Nvme0n1", 00:24:21.688 "filename": "non_existing_file", 00:24:21.688 "method": "bdev_nvme_apply_firmware", 00:24:21.688 "req_id": 1 00:24:21.688 } 00:24:21.688 Got JSON-RPC error response 00:24:21.688 response: 00:24:21.688 { 00:24:21.688 "code": -32603, 00:24:21.688 "message": "open file failed." 00:24:21.688 } 00:24:21.688 23:06:00 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:24:21.688 23:06:00 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:24:21.688 23:06:00 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:21.945 23:06:00 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:24:21.945 23:06:00 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66081 00:24:21.945 23:06:00 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 66081 ']' 00:24:21.945 23:06:00 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 66081 00:24:21.945 23:06:00 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:24:21.946 23:06:00 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.946 23:06:00 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66081 00:24:21.946 23:06:00 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.946 23:06:00 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.946 killing process with pid 66081 00:24:21.946 23:06:00 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66081' 00:24:21.946 23:06:00 nvme_rpc -- common/autotest_common.sh@973 -- # kill 66081 00:24:21.946 23:06:00 nvme_rpc -- common/autotest_common.sh@978 -- # wait 66081 00:24:23.317 00:24:23.317 real 0m3.350s 00:24:23.317 user 0m6.420s 00:24:23.317 sys 0m0.488s 00:24:23.317 23:06:01 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.317 23:06:01 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:23.317 ************************************ 00:24:23.317 END TEST nvme_rpc 00:24:23.317 ************************************ 00:24:23.317 23:06:01 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:24:23.317 23:06:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:24:23.317 23:06:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.317 23:06:01 -- common/autotest_common.sh@10 -- # set +x 00:24:23.317 ************************************ 00:24:23.317 START TEST nvme_rpc_timeouts 00:24:23.317 ************************************ 00:24:23.317 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:24:23.576 * Looking for test storage... 00:24:23.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.576 23:06:01 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:23.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.576 --rc genhtml_branch_coverage=1 00:24:23.576 --rc genhtml_function_coverage=1 00:24:23.576 --rc genhtml_legend=1 00:24:23.576 --rc geninfo_all_blocks=1 00:24:23.576 --rc geninfo_unexecuted_blocks=1 00:24:23.576 00:24:23.576 ' 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:23.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.576 --rc genhtml_branch_coverage=1 00:24:23.576 --rc genhtml_function_coverage=1 00:24:23.576 --rc genhtml_legend=1 00:24:23.576 --rc geninfo_all_blocks=1 00:24:23.576 --rc geninfo_unexecuted_blocks=1 00:24:23.576 00:24:23.576 ' 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:23.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.576 --rc genhtml_branch_coverage=1 00:24:23.576 --rc genhtml_function_coverage=1 00:24:23.576 --rc genhtml_legend=1 00:24:23.576 --rc geninfo_all_blocks=1 00:24:23.576 --rc geninfo_unexecuted_blocks=1 00:24:23.576 00:24:23.576 ' 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:23.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.576 --rc genhtml_branch_coverage=1 00:24:23.576 --rc genhtml_function_coverage=1 00:24:23.576 --rc genhtml_legend=1 00:24:23.576 --rc geninfo_all_blocks=1 00:24:23.576 --rc geninfo_unexecuted_blocks=1 00:24:23.576 00:24:23.576 ' 00:24:23.576 23:06:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.576 23:06:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66146 00:24:23.576 23:06:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66146 00:24:23.576 23:06:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66178 00:24:23.576 23:06:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:24:23.576 23:06:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:24:23.576 23:06:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66178 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 66178 ']' 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.576 23:06:01 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:24:23.576 [2024-12-09 23:06:01.976334] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:24:23.576 [2024-12-09 23:06:01.976471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66178 ] 00:24:23.834 [2024-12-09 23:06:02.136599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:23.834 [2024-12-09 23:06:02.236825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.834 [2024-12-09 23:06:02.236978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.419 23:06:02 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.419 23:06:02 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:24:24.420 Checking default timeout settings: 00:24:24.420 23:06:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:24:24.420 23:06:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:25.017 Making settings changes with rpc: 00:24:25.017 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:24:25.017 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:24:25.017 Check default vs. modified settings: 00:24:25.017 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:24:25.017 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:25.274 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:24:25.274 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66146 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66146 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:24:25.531 Setting action_on_timeout is changed as expected. 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66146 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66146 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:24:25.531 Setting timeout_us is changed as expected. 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66146 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66146 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:24:25.531 Setting timeout_admin_us is changed as expected. 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:24:25.531 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:24:25.532 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66146 /tmp/settings_modified_66146 00:24:25.532 23:06:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66178 00:24:25.532 23:06:03 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 66178 ']' 00:24:25.532 23:06:03 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 66178 00:24:25.532 23:06:03 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:24:25.532 23:06:03 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.532 23:06:03 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66178 00:24:25.532 23:06:03 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.532 23:06:03 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.532 23:06:03 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66178' 00:24:25.532 killing process with pid 66178 00:24:25.532 23:06:03 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 66178 00:24:25.532 23:06:03 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 66178 00:24:26.919 RPC TIMEOUT SETTING TEST PASSED. 00:24:26.919 23:06:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
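The default-vs-modified verification traced above is a plain text comparison over two save_config dumps: each setting is grepped out of both files, the value column is extracted with awk, stripped of punctuation with sed, and the pair is required to differ. Condensed into a standalone sketch (default.json and modified.json are placeholder names for the /tmp/settings_* files):

    #!/usr/bin/env bash
    # Verify that selected bdev_nvme settings differ between two saved RPC configs.
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" default.json | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" modified.json | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [[ "$before" == "$after" ]]; then
            echo "Setting $setting was not changed" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected ($before -> $after)"
    done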
00:24:26.919 ************************************ 00:24:26.919 END TEST nvme_rpc_timeouts 00:24:26.919 ************************************ 00:24:26.919 00:24:26.919 real 0m3.476s 00:24:26.919 user 0m6.827s 00:24:26.919 sys 0m0.521s 00:24:26.919 23:06:05 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.919 23:06:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:24:26.919 23:06:05 -- spdk/autotest.sh@239 -- # uname -s 00:24:26.919 23:06:05 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:24:26.919 23:06:05 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:24:26.919 23:06:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:26.919 23:06:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.919 23:06:05 -- common/autotest_common.sh@10 -- # set +x 00:24:26.919 ************************************ 00:24:26.919 START TEST sw_hotplug 00:24:26.919 ************************************ 00:24:26.919 23:06:05 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:24:26.919 * Looking for test storage... 00:24:26.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:24:26.919 23:06:05 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:26.919 23:06:05 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:24:26.919 23:06:05 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:27.177 23:06:05 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.177 23:06:05 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:24:27.177 23:06:05 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.177 23:06:05 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.177 --rc genhtml_branch_coverage=1 00:24:27.177 --rc genhtml_function_coverage=1 00:24:27.177 --rc genhtml_legend=1 00:24:27.177 --rc geninfo_all_blocks=1 00:24:27.177 --rc geninfo_unexecuted_blocks=1 00:24:27.177 00:24:27.177 ' 00:24:27.177 23:06:05 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.177 --rc genhtml_branch_coverage=1 00:24:27.177 --rc genhtml_function_coverage=1 00:24:27.177 --rc genhtml_legend=1 00:24:27.177 --rc geninfo_all_blocks=1 00:24:27.177 --rc geninfo_unexecuted_blocks=1 00:24:27.177 00:24:27.177 ' 00:24:27.177 23:06:05 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.177 --rc genhtml_branch_coverage=1 00:24:27.177 --rc genhtml_function_coverage=1 00:24:27.177 --rc genhtml_legend=1 00:24:27.177 --rc geninfo_all_blocks=1 00:24:27.177 --rc geninfo_unexecuted_blocks=1 00:24:27.177 00:24:27.177 ' 00:24:27.177 23:06:05 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.177 --rc genhtml_branch_coverage=1 00:24:27.177 --rc genhtml_function_coverage=1 00:24:27.177 --rc genhtml_legend=1 00:24:27.177 --rc geninfo_all_blocks=1 00:24:27.177 --rc geninfo_unexecuted_blocks=1 00:24:27.177 00:24:27.177 ' 00:24:27.177 23:06:05 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:27.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:27.444 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:27.444 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:27.444 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:27.444 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:27.444 23:06:05 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:24:27.444 23:06:05 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:24:27.444 23:06:05 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:24:27.444 23:06:05 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@233 -- # local class 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@18 -- # local i 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@18 -- # local i 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@18 -- # local i 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:24:27.444 23:06:05 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@18 -- # local i 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:27.444 23:06:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:27.445 23:06:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:24:27.445 23:06:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:24:27.445 23:06:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:27.445 23:06:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:27.445 23:06:05 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:24:27.445 23:06:05 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:24:27.445 23:06:05 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:24:27.445 23:06:05 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:24:27.445 23:06:05 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:27.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:27.993 Waiting for block devices as requested 00:24:27.993 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:27.993 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:27.993 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.250 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:24:33.508 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:24:33.508 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:24:33.508 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:33.508 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:24:33.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:33.508 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:24:33.765 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:24:34.022 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:34.022 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:34.022 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:24:34.022 23:06:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:34.279 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:24:34.279 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:24:34.279 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67030 00:24:34.279 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:24:34.279 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:24:34.279 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:24:34.280 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:24:34.280 23:06:12 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:24:34.280 23:06:12 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:24:34.280 23:06:12 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:24:34.280 23:06:12 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:24:34.280 23:06:12 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:24:34.280 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:24:34.280 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:24:34.280 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:24:34.280 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:24:34.280 23:06:12 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:24:34.280 Initializing NVMe Controllers 00:24:34.280 Attaching to 0000:00:10.0 00:24:34.280 Attaching to 0000:00:11.0 00:24:34.546 Attached to 0000:00:10.0 00:24:34.546 Attached to 0000:00:11.0 00:24:34.546 Initialization complete. Starting I/O... 
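Each hotplug event that follows is driven entirely through sysfs: the device is surprise-removed, the PCI bus is rescanned, and driver_override steers the re-discovered device toward a userspace-capable driver before I/O resumes. A rough sketch of one remove/attach cycle for a single device under the standard Linux PCI sysfs layout; the BDF is an example, and the exact sysfs files the xtrace writes each value to are not visible in the log:

    #!/usr/bin/env bash
    # Surprise-remove one PCI NVMe device, then bring it back under uio_pci_generic.
    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # hot-remove from the bus
    echo 1 > /sys/bus/pci/rescan                           # rediscover the device
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe               # probe using the override
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override again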
00:24:34.546 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:24:34.546 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:24:34.546 00:24:35.478 QEMU NVMe Ctrl (12340 ): 2251 I/Os completed (+2251) 00:24:35.478 QEMU NVMe Ctrl (12341 ): 2380 I/Os completed (+2380) 00:24:35.478 00:24:36.409 QEMU NVMe Ctrl (12340 ): 5264 I/Os completed (+3013) 00:24:36.409 QEMU NVMe Ctrl (12341 ): 5421 I/Os completed (+3041) 00:24:36.409 00:24:37.348 QEMU NVMe Ctrl (12340 ): 8357 I/Os completed (+3093) 00:24:37.348 QEMU NVMe Ctrl (12341 ): 8535 I/Os completed (+3114) 00:24:37.348 00:24:38.717 QEMU NVMe Ctrl (12340 ): 11482 I/Os completed (+3125) 00:24:38.717 QEMU NVMe Ctrl (12341 ): 11777 I/Os completed (+3242) 00:24:38.717 00:24:39.648 QEMU NVMe Ctrl (12340 ): 14565 I/Os completed (+3083) 00:24:39.648 QEMU NVMe Ctrl (12341 ): 14950 I/Os completed (+3173) 00:24:39.648 00:24:40.212 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:40.212 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:40.212 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:40.212 [2024-12-09 23:06:18.553649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:24:40.212 Controller removed: QEMU NVMe Ctrl (12340 ) 00:24:40.212 [2024-12-09 23:06:18.555743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.555823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.555855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.555888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:24:40.212 [2024-12-09 23:06:18.559248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.559323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.559350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.559376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:40.212 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:40.212 [2024-12-09 23:06:18.575966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:24:40.212 Controller removed: QEMU NVMe Ctrl (12341 ) 00:24:40.212 [2024-12-09 23:06:18.576843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.576880] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.576897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.576910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:24:40.212 [2024-12-09 23:06:18.578284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.578314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.578328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 [2024-12-09 23:06:18.578338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.212 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:24:40.212 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:40.212 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:24:40.212 EAL: Scan for (pci) bus failed. 00:24:40.212 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:40.212 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:40.212 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:40.469 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:40.469 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:40.469 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:40.469 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:40.469 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:24:40.469 Attaching to 0000:00:10.0 00:24:40.469 Attached to 0000:00:10.0 00:24:40.469 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:24:40.469 00:24:40.469 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:40.469 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:40.469 23:06:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:40.469 Attaching to 0000:00:11.0 00:24:40.469 Attached to 0000:00:11.0 00:24:41.400 QEMU NVMe Ctrl (12340 ): 3261 I/Os completed (+3261) 00:24:41.400 QEMU NVMe Ctrl (12341 ): 3115 I/Os completed (+3115) 00:24:41.400 00:24:42.340 QEMU NVMe Ctrl (12340 ): 6477 I/Os completed (+3216) 00:24:42.340 QEMU NVMe Ctrl (12341 ): 6366 I/Os completed (+3251) 00:24:42.340 00:24:43.711 QEMU NVMe Ctrl (12340 ): 9602 I/Os completed (+3125) 00:24:43.711 QEMU NVMe Ctrl (12341 ): 9772 I/Os completed (+3406) 00:24:43.711 00:24:44.644 QEMU NVMe Ctrl (12340 ): 12541 I/Os completed (+2939) 00:24:44.644 QEMU NVMe Ctrl (12341 ): 12710 I/Os completed (+2938) 00:24:44.644 00:24:45.588 QEMU NVMe Ctrl (12340 ): 15551 I/Os completed (+3010) 00:24:45.588 QEMU NVMe Ctrl (12341 ): 15915 I/Os completed (+3205) 00:24:45.588 00:24:46.523 QEMU NVMe Ctrl (12340 ): 18723 I/Os completed (+3172) 00:24:46.523 QEMU NVMe Ctrl (12341 ): 19101 I/Os completed (+3186) 00:24:46.523 00:24:47.505 QEMU NVMe Ctrl (12340 ): 21866 I/Os completed (+3143) 00:24:47.505 
QEMU NVMe Ctrl (12341 ): 22429 I/Os completed (+3328) 00:24:47.505 00:24:48.437 QEMU NVMe Ctrl (12340 ): 24820 I/Os completed (+2954) 00:24:48.437 QEMU NVMe Ctrl (12341 ): 25461 I/Os completed (+3032) 00:24:48.437 00:24:49.370 QEMU NVMe Ctrl (12340 ): 28390 I/Os completed (+3570) 00:24:49.370 QEMU NVMe Ctrl (12341 ): 29300 I/Os completed (+3839) 00:24:49.370 00:24:50.307 QEMU NVMe Ctrl (12340 ): 31977 I/Os completed (+3587) 00:24:50.307 QEMU NVMe Ctrl (12341 ): 33069 I/Os completed (+3769) 00:24:50.307 00:24:51.678 QEMU NVMe Ctrl (12340 ): 35205 I/Os completed (+3228) 00:24:51.678 QEMU NVMe Ctrl (12341 ): 36334 I/Os completed (+3265) 00:24:51.678 00:24:52.613 QEMU NVMe Ctrl (12340 ): 38483 I/Os completed (+3278) 00:24:52.613 QEMU NVMe Ctrl (12341 ): 39656 I/Os completed (+3322) 00:24:52.613 00:24:52.613 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:24:52.613 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:52.613 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:52.613 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:52.613 [2024-12-09 23:06:30.826871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:24:52.613 Controller removed: QEMU NVMe Ctrl (12340 ) 00:24:52.613 [2024-12-09 23:06:30.827828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.613 [2024-12-09 23:06:30.827874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.613 [2024-12-09 23:06:30.827889] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.613 [2024-12-09 23:06:30.827904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:24:52.614 [2024-12-09 23:06:30.829469] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 [2024-12-09 23:06:30.829509] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 [2024-12-09 23:06:30.829521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 [2024-12-09 23:06:30.829534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:52.614 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:52.614 [2024-12-09 23:06:30.844046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
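
The "Controller removed" and nvme_ctrlr_fail lines above are SPDK reacting to the test yanking the controller out from under it. sw_hotplug.sh drives surprise removal purely through sysfs; the cleanup trap traced later in this log (`echo 1 > /sys/bus/pci/rescan`) confirms the re-add half of the mechanism. A minimal sketch, assuming the standard Linux PCI sysfs nodes and a hypothetical example BDF -- the real script's variable names and ordering may differ:

    # Surprise hot-remove of one PCI device (root required).
    bdf=0000:00:10.0    # hypothetical example address
    # Detach the device from the bus; in-flight I/O is aborted, which is
    # what produces the "aborting outstanding command" errors above.
    echo 1 > "/sys/bus/pci/devices/${bdf}/remove"
    # Later, re-enumerate the whole bus so the device reappears.
    echo 1 > /sys/bus/pci/rescan
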
00:24:52.614 Controller removed: QEMU NVMe Ctrl (12341 ) 00:24:52.614 [2024-12-09 23:06:30.844969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 [2024-12-09 23:06:30.845011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 [2024-12-09 23:06:30.845031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 [2024-12-09 23:06:30.845056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:24:52.614 [2024-12-09 23:06:30.846503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 [2024-12-09 23:06:30.846540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 [2024-12-09 23:06:30.846553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 [2024-12-09 23:06:30.846564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:52.614 EAL: Cannot open sysfs resource 00:24:52.614 EAL: pci_scan_one(): cannot parse resource 00:24:52.614 EAL: Scan for (pci) bus failed. 00:24:52.614 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:24:52.614 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:52.614 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:52.614 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:52.614 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:52.614 23:06:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:52.614 23:06:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:52.614 23:06:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:52.614 23:06:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:52.614 23:06:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:24:52.614 Attaching to 0000:00:10.0 00:24:52.614 Attached to 0000:00:10.0 00:24:52.614 23:06:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:52.871 23:06:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:52.871 23:06:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:52.871 Attaching to 0000:00:11.0 00:24:52.871 Attached to 0000:00:11.0 00:24:53.434 QEMU NVMe Ctrl (12340 ): 2591 I/Os completed (+2591) 00:24:53.434 QEMU NVMe Ctrl (12341 ): 2346 I/Os completed (+2346) 00:24:53.434 00:24:54.366 QEMU NVMe Ctrl (12340 ): 5546 I/Os completed (+2955) 00:24:54.366 QEMU NVMe Ctrl (12341 ): 5360 I/Os completed (+3014) 00:24:54.366 00:24:55.301 QEMU NVMe Ctrl (12340 ): 8548 I/Os completed (+3002) 00:24:55.301 QEMU NVMe Ctrl (12341 ): 8450 I/Os completed (+3090) 00:24:55.301 00:24:56.691 QEMU NVMe Ctrl (12340 ): 11552 I/Os completed (+3004) 00:24:56.691 QEMU NVMe Ctrl (12341 ): 11518 I/Os completed (+3068) 00:24:56.691 00:24:57.625 QEMU NVMe Ctrl (12340 ): 14494 I/Os completed (+2942) 00:24:57.625 QEMU NVMe Ctrl (12341 ): 14543 I/Os completed (+3025) 00:24:57.625 00:24:58.584 QEMU NVMe Ctrl (12340 ): 17762 I/Os completed (+3268) 00:24:58.584 QEMU NVMe Ctrl (12341 ): 17778 I/Os completed (+3235) 00:24:58.584 00:24:59.517 QEMU NVMe Ctrl (12340 ): 21066 I/Os completed (+3304) 00:24:59.517 QEMU NVMe Ctrl (12341 ): 21163 I/Os completed (+3385) 00:24:59.517 00:25:00.454 
QEMU NVMe Ctrl (12340 ): 24378 I/Os completed (+3312) 00:25:00.454 QEMU NVMe Ctrl (12341 ): 24614 I/Os completed (+3451) 00:25:00.454 00:25:01.410 QEMU NVMe Ctrl (12340 ): 27456 I/Os completed (+3078) 00:25:01.410 QEMU NVMe Ctrl (12341 ): 27741 I/Os completed (+3127) 00:25:01.410 00:25:02.346 QEMU NVMe Ctrl (12340 ): 30372 I/Os completed (+2916) 00:25:02.346 QEMU NVMe Ctrl (12341 ): 30888 I/Os completed (+3147) 00:25:02.346 00:25:03.718 QEMU NVMe Ctrl (12340 ): 33306 I/Os completed (+2934) 00:25:03.718 QEMU NVMe Ctrl (12341 ): 33908 I/Os completed (+3020) 00:25:03.718 00:25:04.652 QEMU NVMe Ctrl (12340 ): 36265 I/Os completed (+2959) 00:25:04.652 QEMU NVMe Ctrl (12341 ): 36928 I/Os completed (+3020) 00:25:04.652 00:25:04.652 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:25:04.652 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:04.652 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:04.652 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:04.652 [2024-12-09 23:06:43.086402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:25:04.652 Controller removed: QEMU NVMe Ctrl (12340 ) 00:25:04.652 [2024-12-09 23:06:43.087578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.087627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.087645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.087664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:25:04.652 [2024-12-09 23:06:43.089656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.089703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.089720] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.089735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/device 00:25:04.652 EAL: Scan for (pci) bus failed. 00:25:04.652 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:04.652 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:04.652 [2024-12-09 23:06:43.107924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
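
Once the bus is rescanned, the controller comes back unbound, and the echo uio_pci_generic / echo 0000:00:10.0 / echo '' sequence traced at sw_hotplug.sh@59-62 (above and just below) is the familiar sysfs driver_override pattern for steering a device to a chosen driver. The trace only shows the values echoed, not the files they land in, so the paths below are inferred from the standard kernel interface; the duplicated `echo $bdf` most likely covers an unbind followed by a re-probe:

    bdf=0000:00:10.0          # hypothetical example address
    drv=uio_pci_generic
    # Pin the next probe of this device to the chosen driver.
    echo "$drv" > "/sys/bus/pci/devices/${bdf}/driver_override"
    # Ask the driver core to probe the device again.
    echo "$bdf" > /sys/bus/pci/drivers_probe
    # Clear the override so future probes behave normally
    # (the trailing `echo ''` in the trace).
    echo "" > "/sys/bus/pci/devices/${bdf}/driver_override"
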
00:25:04.652 Controller removed: QEMU NVMe Ctrl (12341 ) 00:25:04.652 [2024-12-09 23:06:43.109045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.109090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.109109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.109124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:25:04.652 [2024-12-09 23:06:43.112334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.112370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.112390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.652 [2024-12-09 23:06:43.112403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:25:04.910 Attaching to 0000:00:10.0 00:25:04.910 Attached to 0000:00:10.0 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:04.910 23:06:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:04.910 Attaching to 0000:00:11.0 00:25:04.910 Attached to 0000:00:11.0 00:25:04.910 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:25:04.910 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:25:05.167 [2024-12-09 23:06:43.371730] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:25:17.405 23:06:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:25:17.405 23:06:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:17.405 23:06:55 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.81 00:25:17.405 23:06:55 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.81 00:25:17.405 23:06:55 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:25:17.405 23:06:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:25:17.405 23:06:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:25:17.405 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 23:06:55 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:25:23.971 23:07:01 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67030 00:25:23.971 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67030) - No such process 00:25:23.971 23:07:01 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67030 00:25:23.971 23:07:01 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:25:23.971 23:07:01 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:25:23.971 23:07:01 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:25:23.971 23:07:01 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67582 00:25:23.972 23:07:01 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:25:23.972 23:07:01 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67582 00:25:23.972 23:07:01 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:23.972 23:07:01 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67582 ']' 00:25:23.972 23:07:01 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.972 23:07:01 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:23.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.972 23:07:01 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.972 23:07:01 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:23.972 23:07:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:23.972 [2024-12-09 23:07:01.445678] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:25:23.972 [2024-12-09 23:07:01.445842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67582 ] 00:25:23.972 [2024-12-09 23:07:01.601270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.972 [2024-12-09 23:07:01.703832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.972 23:07:02 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:23.972 23:07:02 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:25:23.972 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:25:23.972 23:07:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.972 23:07:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:23.972 23:07:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.972 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:25:23.972 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:25:23.972 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:25:23.972 23:07:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:25:23.972 23:07:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:25:23.972 23:07:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:25:23.972 23:07:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:25:23.972 23:07:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:25:23.972 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:25:23.972 23:07:02 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:25:23.972 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:25:23.972 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:25:23.972 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:30.535 23:07:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.535 23:07:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:30.535 23:07:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:25:30.535 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:30.535 [2024-12-09 23:07:08.446460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:25:30.535 [2024-12-09 23:07:08.448058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.535 [2024-12-09 23:07:08.448099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.535 [2024-12-09 23:07:08.448113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.535 [2024-12-09 23:07:08.448133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.535 [2024-12-09 23:07:08.448141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.535 [2024-12-09 23:07:08.448150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.535 [2024-12-09 23:07:08.448157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.535 [2024-12-09 23:07:08.448165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.535 [2024-12-09 23:07:08.448172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.535 [2024-12-09 23:07:08.448184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.535 [2024-12-09 23:07:08.448191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.535 [2024-12-09 23:07:08.448200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.536 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:25:30.536 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:30.536 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:30.536 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:30.536 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:30.536 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:30.536 23:07:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.536 23:07:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:30.536 [2024-12-09 23:07:08.946442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:25:30.536 [2024-12-09 23:07:08.947862] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.536 [2024-12-09 23:07:08.947897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.536 [2024-12-09 23:07:08.947910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.536 [2024-12-09 23:07:08.947926] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.536 [2024-12-09 23:07:08.947935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.536 [2024-12-09 23:07:08.947943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.536 [2024-12-09 23:07:08.947952] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.536 [2024-12-09 23:07:08.947959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.536 [2024-12-09 23:07:08.947967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.536 [2024-12-09 23:07:08.947975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:30.536 [2024-12-09 23:07:08.947983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.536 [2024-12-09 23:07:08.947991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.536 23:07:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.536 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:25:30.536 23:07:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:31.103 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:25:31.103 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:31.103 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:31.103 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:31.103 
23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:31.103 23:07:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.103 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:31.103 23:07:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:31.103 23:07:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.103 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:25:31.103 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:31.362 23:07:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:43.561 23:07:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.561 23:07:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:43.561 23:07:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:43.561 [2024-12-09 23:07:21.846615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
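
The sh@12/sh@13 trace lines above show exactly how the bdev-based half of the test decides whether a removal has landed: it asks the running SPDK target for its bdevs over RPC and distills the NVMe PCI addresses with jq. Reconstructed from the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    # List the PCI addresses backing the target's NVMe bdevs, one per line.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # Poll until the removed controllers vanish from the target's view,
    # mirroring the sh@50/sh@51 loop traced throughout this log.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
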
00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:43.561 [2024-12-09 23:07:21.848146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.561 [2024-12-09 23:07:21.848285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.561 [2024-12-09 23:07:21.848366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.561 [2024-12-09 23:07:21.848442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.561 [2024-12-09 23:07:21.848530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.561 [2024-12-09 23:07:21.848562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.561 [2024-12-09 23:07:21.848589] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.561 [2024-12-09 23:07:21.848676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.561 [2024-12-09 23:07:21.848704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.561 [2024-12-09 23:07:21.848766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:43.561 [2024-12-09 23:07:21.848798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.561 [2024-12-09 23:07:21.848824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:43.561 23:07:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.561 23:07:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:43.561 23:07:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:25:43.561 23:07:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:44.131 [2024-12-09 23:07:22.346617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
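
Pieced together, the traced script lines give the shape of remove_attach_helper itself: sh@27-29 set hotplug_events=3, hotplug_wait=6 and use_bdev; each pass removes every controller, confirms the detach (via bdev_bdfs when use_bdev is true, hence the `# true` at sh@43 here versus the `# false` in the earlier application-based run), then rebinds and settles before the next pass. A loose reconstruction under those assumptions, not the script verbatim:

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
        sleep "$hotplug_wait"                 # let I/O ramp up (sh@36)
        while ((hotplug_events--)); do        # sh@38: three remove/attach passes
            for dev in "${nvmes[@]}"; do      # nvmes holds the BDFs under test
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed path
            done
            # wait for detach (bdev_bdfs poll when use_bdev=true), then
            # rescan and rebind as traced at sh@56-62, then settle:
            sleep $((hotplug_wait * 2))       # the `sleep 12` at sh@66
        done
    }
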
00:25:44.131 [2024-12-09 23:07:22.348058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:44.131 [2024-12-09 23:07:22.348096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.131 [2024-12-09 23:07:22.348111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.131 [2024-12-09 23:07:22.348128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:44.131 [2024-12-09 23:07:22.348136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.131 [2024-12-09 23:07:22.348143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.131 [2024-12-09 23:07:22.348153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:44.131 [2024-12-09 23:07:22.348160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.131 [2024-12-09 23:07:22.348168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.132 [2024-12-09 23:07:22.348175] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:44.132 [2024-12-09 23:07:22.348184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.132 [2024-12-09 23:07:22.348190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:44.132 23:07:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.132 23:07:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:44.132 23:07:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:44.132 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:25:44.392 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:44.392 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:44.392 23:07:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:56.633 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:25:56.633 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:56.633 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:56.633 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:56.633 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:56.633 23:07:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.633 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:56.633 23:07:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:56.633 23:07:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.633 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:56.633 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:56.633 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:56.633 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:56.634 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:56.634 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:56.634 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:25:56.634 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:56.634 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:56.634 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:56.634 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:56.634 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:56.634 23:07:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.634 23:07:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:56.634 [2024-12-09 23:07:34.746775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
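
The reason a failed run doesn't strand the VM with its disks removed is the trap installed at sh@112, traced earlier in this log (killprocess is the suite's own helper, named in that trace). It is worth reading as a pattern on its own -- kill the target and force a bus rescan on any exit path, then clear the handler once teardown succeeds, which is exactly what sh@102 does:

    spdk_tgt_pid=67582    # PID from this run's trace
    # On interrupt or premature exit: stop the target, restore any
    # removed PCI devices, and fail loudly.
    trap 'killprocess "$spdk_tgt_pid"; echo 1 > /sys/bus/pci/rescan; exit 1' \
        SIGINT SIGTERM EXIT
    # ... test body ...
    # On the success path, drop the handler before normal cleanup.
    trap - SIGINT SIGTERM EXIT
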
00:25:56.634 [2024-12-09 23:07:34.748133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:56.634 [2024-12-09 23:07:34.748166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.634 [2024-12-09 23:07:34.748178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.634 [2024-12-09 23:07:34.748196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:56.634 [2024-12-09 23:07:34.748204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.634 [2024-12-09 23:07:34.748215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.634 [2024-12-09 23:07:34.748234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:56.634 [2024-12-09 23:07:34.748244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.634 [2024-12-09 23:07:34.748251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.634 [2024-12-09 23:07:34.748260] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:56.634 [2024-12-09 23:07:34.748267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.634 [2024-12-09 23:07:34.748275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.634 23:07:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.634 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:25:56.634 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:56.895 [2024-12-09 23:07:35.246797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
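
One bash detail that makes the bdev_bdfs traces look odd: jq is always shown reading from /dev/fd/63. That is just how `set -x` renders process substitution -- the `<(rpc_cmd ...)` argument is expanded to a /dev/fd path before the command line is printed. A tiny standalone illustration (trace output abridged; the exact interleaving of the subshell's trace lines can vary):

    $ set -x
    $ jq -r '.[]' <(echo '["a","b"]')
    ++ echo '["a","b"]'
    + jq -r '.[]' /dev/fd/63
    a
    b
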
00:25:56.895 [2024-12-09 23:07:35.248279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:56.895 [2024-12-09 23:07:35.248313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.895 [2024-12-09 23:07:35.248326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.895 [2024-12-09 23:07:35.248341] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:56.895 [2024-12-09 23:07:35.248351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.895 [2024-12-09 23:07:35.248358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.895 [2024-12-09 23:07:35.248367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:56.895 [2024-12-09 23:07:35.248374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.896 [2024-12-09 23:07:35.248384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.896 [2024-12-09 23:07:35.248392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:56.896 [2024-12-09 23:07:35.248400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.896 [2024-12-09 23:07:35.248407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.896 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:25:56.896 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:56.896 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:56.896 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:56.896 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:56.896 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:56.896 23:07:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.896 23:07:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:56.896 23:07:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.896 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:25:56.896 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:57.199 23:07:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.24 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.24 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.24 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.24 2 00:26:09.442 remove_attach_helper took 45.24s to complete (handling 2 nvme drive(s)) 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:26:09.442 23:07:47 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:26:09.442 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:26:09.442 23:07:47 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:16.082 23:07:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.082 23:07:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:16.082 23:07:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:26:16.082 23:07:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:26:16.082 [2024-12-09 23:07:53.710725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:26:16.082 [2024-12-09 23:07:53.711817] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:16.082 [2024-12-09 23:07:53.711857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.082 [2024-12-09 23:07:53.711869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.082 [2024-12-09 23:07:53.711888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:16.082 [2024-12-09 23:07:53.711896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.082 [2024-12-09 23:07:53.711905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.082 [2024-12-09 23:07:53.711913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:16.082 [2024-12-09 23:07:53.711921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.082 [2024-12-09 23:07:53.711928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.082 [2024-12-09 23:07:53.711937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:16.082 [2024-12-09 23:07:53.711944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.082 [2024-12-09 23:07:53.711954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.082 [2024-12-09 23:07:54.110718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
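
The RPC pair traced at sh@119/sh@120 just before this phase (rpc_cmd bdev_nvme_set_hotplug -d, then -e) switches the target's own NVMe hotplug monitor off and back on, so both code paths get exercised. Against any running target the same calls are simply:

    # Disable the target's NVMe hotplug monitor (sh@119)...
    rpc_cmd bdev_nvme_set_hotplug -d
    # ...and re-enable it (sh@115 / sh@120); equivalently, outside the
    # test harness: scripts/rpc.py bdev_nvme_set_hotplug -e
    rpc_cmd bdev_nvme_set_hotplug -e
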
00:26:16.082 [2024-12-09 23:07:54.111800] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:16.082 [2024-12-09 23:07:54.111832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.082 [2024-12-09 23:07:54.111845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.082 [2024-12-09 23:07:54.111861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:16.082 [2024-12-09 23:07:54.111870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.082 [2024-12-09 23:07:54.111877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.082 [2024-12-09 23:07:54.111886] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:16.082 [2024-12-09 23:07:54.111893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.082 [2024-12-09 23:07:54.111901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.082 [2024-12-09 23:07:54.111908] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:16.082 [2024-12-09 23:07:54.111916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.082 [2024-12-09 23:07:54.111922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:16.082 23:07:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.082 23:07:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:16.082 23:07:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:16.082 23:07:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:28.400 23:08:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.400 23:08:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:28.400 23:08:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:28.400 [2024-12-09 23:08:06.510900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:26:28.400 [2024-12-09 23:08:06.512410] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:28.400 [2024-12-09 23:08:06.512453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.400 [2024-12-09 23:08:06.512465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.400 [2024-12-09 23:08:06.512483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:28.400 [2024-12-09 23:08:06.512491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.400 [2024-12-09 23:08:06.512500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.400 [2024-12-09 23:08:06.512508] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:28.400 [2024-12-09 23:08:06.512517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.400 [2024-12-09 23:08:06.512524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.400 [2024-12-09 23:08:06.512532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:28.400 [2024-12-09 23:08:06.512539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.400 [2024-12-09 23:08:06.512547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:28.400 23:08:06 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:28.400 23:08:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.400 23:08:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:28.400 23:08:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:26:28.400 23:08:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:26:28.666 [2024-12-09 23:08:07.010915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:26:28.666 [2024-12-09 23:08:07.011966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:28.666 [2024-12-09 23:08:07.012001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.666 [2024-12-09 23:08:07.012019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.666 [2024-12-09 23:08:07.012036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:28.666 [2024-12-09 23:08:07.012046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.666 [2024-12-09 23:08:07.012053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.666 [2024-12-09 23:08:07.012064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:28.666 [2024-12-09 23:08:07.012071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.666 [2024-12-09 23:08:07.012079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.666 [2024-12-09 23:08:07.012086] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:28.666 [2024-12-09 23:08:07.012095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.666 [2024-12-09 23:08:07.012101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.666 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:26:28.666 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:28.666 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:28.666 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:28.666 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:28.666 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:26:28.666 23:08:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.666 23:08:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:28.666 23:08:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.666 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:26:28.666 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:28.928 23:08:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:41.212 23:08:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.212 23:08:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:41.212 23:08:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:41.212 [2024-12-09 23:08:19.411089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
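
A note on the probe the trace keeps repeating: the bdev_bdfs helper (sw_hotplug.sh@12-13 above) asks the running SPDK target which NVMe bdevs it still exposes and reduces the answer to a sorted list of PCI addresses, which the caller then compares against the expected device set. A minimal sketch reconstructed from the traced pipeline; the /dev/fd/63 in the jq invocation is a process substitution, and rpc_cmd is the harness wrapper around the target's JSON-RPC socket:

    # List the PCI addresses of every NVMe controller the target still sees.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # Caller-side verification, as traced at sw_hotplug.sh@70-71:
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]
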
00:26:41.212 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:41.212 [2024-12-09 23:08:19.412373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:41.213 [2024-12-09 23:08:19.412405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.213 [2024-12-09 23:08:19.412417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.213 [2024-12-09 23:08:19.412435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:41.213 [2024-12-09 23:08:19.412442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.213 [2024-12-09 23:08:19.412451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.213 [2024-12-09 23:08:19.412459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:41.213 [2024-12-09 23:08:19.412470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.213 [2024-12-09 23:08:19.412477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.213 [2024-12-09 23:08:19.412486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:41.213 [2024-12-09 23:08:19.412492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.213 [2024-12-09 23:08:19.412501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.213 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:41.213 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:41.213 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:41.213 23:08:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.213 23:08:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:41.213 23:08:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.213 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:26:41.213 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:26:41.473 [2024-12-09 23:08:19.911092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:26:41.473 [2024-12-09 23:08:19.913898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:41.473 [2024-12-09 23:08:19.913940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.473 [2024-12-09 23:08:19.913953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.473 [2024-12-09 23:08:19.913969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:41.473 [2024-12-09 23:08:19.913978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.473 [2024-12-09 23:08:19.913985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.473 [2024-12-09 23:08:19.913996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:41.473 [2024-12-09 23:08:19.914003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.473 [2024-12-09 23:08:19.914012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.473 [2024-12-09 23:08:19.914019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:41.474 [2024-12-09 23:08:19.914030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.474 [2024-12-09 23:08:19.914037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.734 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:26:41.734 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:41.734 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:41.734 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:41.734 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:41.734 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:41.734 23:08:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.734 23:08:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:41.734 23:08:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.734 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:26:41.734 23:08:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:26:41.734 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:41.734 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:41.734 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:26:41.734 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:26:41.734 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:41.734 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:41.734 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:41.734 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
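
Where the echo statements at sw_hotplug.sh@40 and @56-62 actually write is hidden by xtrace, which never records redirection targets; the trace shows four echoes per device (@59-62) whose destinations cannot be recovered from the log. The following is therefore a hypothetical reconstruction of one remove/re-attach cycle, assuming standard Linux PCI sysfs hotplug mechanics; every sysfs path is an assumption, not something the log confirms:

    for bdf in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"      # sh@40: hot-remove (assumed path)
    done
    # sh@50-51: poll until the target stops reporting the removed devices.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
    echo 1 > /sys/bus/pci/rescan                         # sh@56: re-discover (assumed path)
    for bdf in "${nvmes[@]}"; do                         # sh@58-62: re-bind (mapping guessed)
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    done
    sleep 12                                             # sh@66: let the devices settle

The ASYNC EVENT REQUEST aborts interleaved above are the expected side effect of this cycle: hot-removing a controller puts it in the failed state, which forces the driver to abort the admin commands still outstanding against it.
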
00:26:41.995 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:26:41.995 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:41.995 23:08:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.63 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.63 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2 00:26:54.305 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s)) 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:26:54.305 23:08:32 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67582 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67582 ']' 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67582 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67582 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:54.305 killing process with pid 67582 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67582' 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67582 00:26:54.305 23:08:32 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67582 00:26:55.245 23:08:33 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:55.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:56.078 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:56.078 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:56.078 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:26:56.078 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:26:56.403 00:26:56.403 real 2m29.272s 00:26:56.403 user 1m51.631s 00:26:56.403 sys 0m16.444s 00:26:56.403 23:08:34 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.403 ************************************ 00:26:56.403 23:08:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:56.403 END TEST sw_hotplug 00:26:56.403 ************************************ 00:26:56.403 23:08:34 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:26:56.403 23:08:34 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:26:56.403 23:08:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:56.403 23:08:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:56.403 23:08:34 -- common/autotest_common.sh@10 -- # set +x 00:26:56.403 ************************************ 00:26:56.403 START TEST nvme_xnvme 00:26:56.403 ************************************ 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:26:56.403 * Looking for test storage... 00:26:56.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.403 23:08:34 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:56.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.403 --rc genhtml_branch_coverage=1 00:26:56.403 --rc genhtml_function_coverage=1 00:26:56.403 --rc genhtml_legend=1 00:26:56.403 --rc geninfo_all_blocks=1 00:26:56.403 --rc geninfo_unexecuted_blocks=1 00:26:56.403 00:26:56.403 ' 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:56.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.403 --rc genhtml_branch_coverage=1 00:26:56.403 --rc genhtml_function_coverage=1 00:26:56.403 --rc genhtml_legend=1 00:26:56.403 --rc geninfo_all_blocks=1 00:26:56.403 --rc geninfo_unexecuted_blocks=1 00:26:56.403 00:26:56.403 ' 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:56.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.403 --rc genhtml_branch_coverage=1 00:26:56.403 --rc genhtml_function_coverage=1 00:26:56.403 --rc genhtml_legend=1 00:26:56.403 --rc geninfo_all_blocks=1 00:26:56.403 --rc geninfo_unexecuted_blocks=1 00:26:56.403 00:26:56.403 ' 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:56.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.403 --rc genhtml_branch_coverage=1 00:26:56.403 --rc genhtml_function_coverage=1 00:26:56.403 --rc genhtml_legend=1 00:26:56.403 --rc geninfo_all_blocks=1 00:26:56.403 --rc geninfo_unexecuted_blocks=1 00:26:56.403 00:26:56.403 ' 00:26:56.403 23:08:34 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:26:56.403 23:08:34 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:56.403 23:08:34 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:56.403 23:08:34 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:56.403 23:08:34 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:26:56.404 23:08:34 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:26:56.404 23:08:34 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:26:56.404 23:08:34 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:56.404 23:08:34 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:56.404 #define SPDK_CONFIG_H 00:26:56.404 #define SPDK_CONFIG_AIO_FSDEV 1 00:26:56.404 #define SPDK_CONFIG_APPS 1 00:26:56.404 #define SPDK_CONFIG_ARCH native 00:26:56.404 #define SPDK_CONFIG_ASAN 1 00:26:56.404 #undef SPDK_CONFIG_AVAHI 00:26:56.404 #undef SPDK_CONFIG_CET 00:26:56.404 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:26:56.404 #define SPDK_CONFIG_COVERAGE 1 00:26:56.404 #define SPDK_CONFIG_CROSS_PREFIX 00:26:56.404 #undef SPDK_CONFIG_CRYPTO 00:26:56.404 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:56.404 #undef SPDK_CONFIG_CUSTOMOCF 00:26:56.404 #undef SPDK_CONFIG_DAOS 00:26:56.404 #define SPDK_CONFIG_DAOS_DIR 00:26:56.404 #define SPDK_CONFIG_DEBUG 1 00:26:56.404 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:56.404 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:56.404 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:56.404 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:56.404 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:56.404 #undef SPDK_CONFIG_DPDK_UADK 00:26:56.404 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:56.404 #define SPDK_CONFIG_EXAMPLES 1 00:26:56.404 #undef SPDK_CONFIG_FC 00:26:56.404 #define SPDK_CONFIG_FC_PATH 00:26:56.404 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:56.404 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:56.404 #define SPDK_CONFIG_FSDEV 1 00:26:56.404 #undef SPDK_CONFIG_FUSE 00:26:56.404 #undef SPDK_CONFIG_FUZZER 00:26:56.404 #define SPDK_CONFIG_FUZZER_LIB 00:26:56.404 #undef SPDK_CONFIG_GOLANG 00:26:56.404 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:26:56.404 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:26:56.404 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:56.404 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:26:56.404 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:56.404 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:56.404 #undef SPDK_CONFIG_HAVE_LZ4 00:26:56.404 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:26:56.404 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:26:56.404 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:56.404 #define SPDK_CONFIG_IDXD 1 00:26:56.404 #define SPDK_CONFIG_IDXD_KERNEL 1 00:26:56.404 #undef SPDK_CONFIG_IPSEC_MB 00:26:56.404 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:56.404 #define SPDK_CONFIG_ISAL 1 00:26:56.404 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:56.404 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:56.404 #define SPDK_CONFIG_LIBDIR 00:26:56.404 #undef SPDK_CONFIG_LTO 00:26:56.404 #define SPDK_CONFIG_MAX_LCORES 128 00:26:56.404 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:26:56.404 #define SPDK_CONFIG_NVME_CUSE 1 00:26:56.404 #undef SPDK_CONFIG_OCF 00:26:56.404 #define SPDK_CONFIG_OCF_PATH 00:26:56.404 #define SPDK_CONFIG_OPENSSL_PATH 00:26:56.404 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:56.404 #define SPDK_CONFIG_PGO_DIR 00:26:56.404 #undef SPDK_CONFIG_PGO_USE 00:26:56.404 #define SPDK_CONFIG_PREFIX /usr/local 00:26:56.404 #undef SPDK_CONFIG_RAID5F 00:26:56.404 #undef SPDK_CONFIG_RBD 00:26:56.404 #define SPDK_CONFIG_RDMA 1 00:26:56.404 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:56.404 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:56.405 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:56.405 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:56.405 #define SPDK_CONFIG_SHARED 1 00:26:56.405 #undef SPDK_CONFIG_SMA 00:26:56.405 #define SPDK_CONFIG_TESTS 1 00:26:56.405 #undef SPDK_CONFIG_TSAN 00:26:56.405 #define SPDK_CONFIG_UBLK 1 00:26:56.405 #define SPDK_CONFIG_UBSAN 1 00:26:56.405 #undef SPDK_CONFIG_UNIT_TESTS 00:26:56.405 #undef SPDK_CONFIG_URING 00:26:56.405 #define SPDK_CONFIG_URING_PATH 00:26:56.405 #undef SPDK_CONFIG_URING_ZNS 00:26:56.405 #undef SPDK_CONFIG_USDT 00:26:56.405 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:56.405 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:56.405 #undef SPDK_CONFIG_VFIO_USER 00:26:56.405 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:56.405 #define SPDK_CONFIG_VHOST 1 00:26:56.405 #define SPDK_CONFIG_VIRTIO 1 00:26:56.405 #undef SPDK_CONFIG_VTUNE 00:26:56.405 #define SPDK_CONFIG_VTUNE_DIR 00:26:56.405 #define SPDK_CONFIG_WERROR 1 00:26:56.405 #define SPDK_CONFIG_WPDK_DIR 00:26:56.405 #define SPDK_CONFIG_XNVME 1 00:26:56.405 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:56.405 23:08:34 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:56.405 23:08:34 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:56.405 23:08:34 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.405 23:08:34 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.405 23:08:34 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.405 23:08:34 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.405 23:08:34 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.405 23:08:34 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.405 23:08:34 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.405 23:08:34 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:26:56.405 23:08:34 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@68 -- # uname -s 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:26:56.405 
23:08:34 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:26:56.405 23:08:34 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:26:56.405 23:08:34 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:26:56.406 23:08:34 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:56.406 23:08:34 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
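
A reading aid for the long run of flag exports above (autotest_common.sh@58-178): each traced pair of a bare ': 0' or ': 1' followed by an export, such as 'export SPDK_TEST_NVME', is the standard shell defaulting idiom. The ':' builtin is a no-op, and expanding "${VAR:=default}" as its argument assigns the default only when VAR is unset or empty, so a value injected by the CI environment wins over the script's fallback. xtrace prints the post-expansion form, which is why only the bare ': 0' survives in the log. A minimal sketch:

    # Assign 0 only if SPDK_TEST_NVME is unset or empty; xtrace logs this line as ': 0' or ': 1'.
    : "${SPDK_TEST_NVME:=0}"
    export SPDK_TEST_NVME
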
00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68936 ]] 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68936 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:26:56.406 23:08:34 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.XOw72L 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.XOw72L/tests/xnvme /tmp/spdk.XOw72L 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:26:56.407 23:08:34 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976096768 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591330816 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976096768 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591330816 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265237504 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91147608064 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=8555171840 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:26:56.407 * Looking for test storage... 
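The df -T parse above fills per-mount size/avail tables; the candidate walk that follows picks the first directory whose backing filesystem has enough free space. A hedged simplification assuming GNU df (the real set_test_storage also special-cases tmpfs/ramfs, as the trace shows; $testdir and $storage_fallback are the harness variables seen above):

requested_size=2214592512   # bytes, as computed in the trace
for target_dir in "$testdir" "$storage_fallback/tests/xnvme" "$storage_fallback"; do
    [[ -d $target_dir ]] || continue
    target_space=$(df --output=avail -B1 "$target_dir" | tail -n1)
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done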
00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976096768 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:26:56.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:56.407 23:08:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:26:56.670 23:08:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:26:56.670 23:08:34 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.670 23:08:34 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:56.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.670 --rc genhtml_branch_coverage=1 00:26:56.670 --rc genhtml_function_coverage=1 00:26:56.670 --rc genhtml_legend=1 00:26:56.670 --rc geninfo_all_blocks=1 00:26:56.670 --rc geninfo_unexecuted_blocks=1 00:26:56.670 00:26:56.670 ' 00:26:56.670 23:08:34 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:56.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.670 --rc genhtml_branch_coverage=1 00:26:56.670 --rc genhtml_function_coverage=1 00:26:56.670 --rc genhtml_legend=1 00:26:56.670 --rc geninfo_all_blocks=1 
00:26:56.670 --rc geninfo_unexecuted_blocks=1 00:26:56.670 00:26:56.670 ' 00:26:56.670 23:08:34 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:56.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.670 --rc genhtml_branch_coverage=1 00:26:56.670 --rc genhtml_function_coverage=1 00:26:56.670 --rc genhtml_legend=1 00:26:56.670 --rc geninfo_all_blocks=1 00:26:56.670 --rc geninfo_unexecuted_blocks=1 00:26:56.670 00:26:56.670 ' 00:26:56.670 23:08:34 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:56.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.670 --rc genhtml_branch_coverage=1 00:26:56.670 --rc genhtml_function_coverage=1 00:26:56.670 --rc genhtml_legend=1 00:26:56.670 --rc geninfo_all_blocks=1 00:26:56.670 --rc geninfo_unexecuted_blocks=1 00:26:56.670 00:26:56.670 ' 00:26:56.670 23:08:34 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.670 23:08:34 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.670 23:08:34 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.670 23:08:34 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.670 23:08:34 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.670 23:08:34 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:26:56.670 23:08:34 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.670 23:08:34 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:26:56.670 23:08:34 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:56.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:56.932 Waiting for block devices as requested 00:26:56.932 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:57.193 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:57.193 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:26:57.193 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:02.506 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:02.506 23:08:40 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:27:02.506 23:08:40 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:27:02.506 23:08:40 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:27:02.767 23:08:41 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:27:02.767 23:08:41 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:27:02.767 No valid GPT data, bailing 00:27:02.767 23:08:41 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:02.767 23:08:41 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:27:02.767 23:08:41 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:27:02.767 23:08:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:27:02.767 23:08:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:02.767 23:08:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.767 23:08:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:02.767 ************************************ 00:27:02.767 START TEST xnvme_rpc 00:27:02.767 ************************************ 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69321 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69321 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69321 ']' 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.767 23:08:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:02.767 [2024-12-09 23:08:41.187302] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
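The xnvme_rpc test body that follows drives the target purely over JSON-RPC. Outside the harness, the same sequence can be reproduced with SPDK's stock rpc.py against the default socket; this is a sketch, assuming the trace's rpc_cmd wraps that script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock
$rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio   # filename, bdev name, io_mechanism
$rpc framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # -> /dev/nvme0n1
$rpc bdev_xnvme_delete xnvme_bdev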
00:27:02.767 [2024-12-09 23:08:41.187429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69321 ] 00:27:03.026 [2024-12-09 23:08:41.347911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.026 [2024-12-09 23:08:41.448909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.597 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:03.597 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:27:03.597 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:27:03.597 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.597 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.857 xnvme_bdev 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:27:03.857 23:08:42 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69321 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69321 ']' 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69321 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69321 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:03.857 killing process with pid 69321 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69321' 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69321 00:27:03.857 23:08:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69321 00:27:05.769 00:27:05.769 real 0m2.596s 00:27:05.769 user 0m2.613s 00:27:05.769 sys 0m0.350s 00:27:05.769 23:08:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.769 23:08:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:05.769 ************************************ 00:27:05.769 END TEST xnvme_rpc 00:27:05.769 ************************************ 00:27:05.769 23:08:43 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:27:05.769 23:08:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:05.769 23:08:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.769 23:08:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:05.769 ************************************ 00:27:05.769 START TEST xnvme_bdevperf 00:27:05.769 ************************************ 00:27:05.769 23:08:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:27:05.769 23:08:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:27:05.769 23:08:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:27:05.769 23:08:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:05.769 23:08:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:27:05.769 23:08:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:27:05.769 23:08:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:27:05.769 23:08:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.769 { 00:27:05.769 "subsystems": [ 00:27:05.769 { 00:27:05.769 "subsystem": "bdev", 00:27:05.769 "config": [ 00:27:05.769 { 00:27:05.769 "params": { 00:27:05.769 "io_mechanism": "libaio", 00:27:05.769 "conserve_cpu": false, 00:27:05.769 "filename": "/dev/nvme0n1", 00:27:05.769 "name": "xnvme_bdev" 00:27:05.769 }, 00:27:05.769 "method": "bdev_xnvme_create" 00:27:05.769 }, 00:27:05.769 { 00:27:05.769 "method": "bdev_wait_for_examine" 00:27:05.769 } 00:27:05.769 ] 00:27:05.769 } 00:27:05.769 ] 00:27:05.769 } 00:27:05.769 [2024-12-09 23:08:43.819425] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:27:05.769 [2024-12-09 23:08:43.819544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69390 ] 00:27:05.769 [2024-12-09 23:08:43.979476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.769 [2024-12-09 23:08:44.078865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.068 Running I/O for 5 seconds... 00:27:07.952 37523.00 IOPS, 146.57 MiB/s [2024-12-09T23:08:47.356Z] 37160.00 IOPS, 145.16 MiB/s [2024-12-09T23:08:48.744Z] 37120.00 IOPS, 145.00 MiB/s [2024-12-09T23:08:49.686Z] 36903.75 IOPS, 144.16 MiB/s 00:27:11.224 Latency(us) 00:27:11.224 [2024-12-09T23:08:49.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.224 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:27:11.224 xnvme_bdev : 5.00 36999.46 144.53 0.00 0.00 1725.45 189.83 11141.12 00:27:11.224 [2024-12-09T23:08:49.686Z] =================================================================================================================== 00:27:11.224 [2024-12-09T23:08:49.686Z] Total : 36999.46 144.53 0.00 0.00 1725.45 189.83 11141.12 00:27:11.796 23:08:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:11.796 23:08:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:27:11.796 23:08:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:27:11.796 23:08:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:27:11.796 23:08:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:11.796 { 00:27:11.796 "subsystems": [ 00:27:11.796 { 00:27:11.796 "subsystem": "bdev", 00:27:11.796 "config": [ 00:27:11.796 { 00:27:11.796 "params": { 00:27:11.796 "io_mechanism": "libaio", 00:27:11.796 "conserve_cpu": false, 00:27:11.796 "filename": "/dev/nvme0n1", 00:27:11.796 "name": "xnvme_bdev" 00:27:11.796 }, 00:27:11.796 "method": "bdev_xnvme_create" 00:27:11.796 }, 00:27:11.796 { 00:27:11.796 "method": "bdev_wait_for_examine" 00:27:11.796 } 00:27:11.796 ] 00:27:11.796 } 00:27:11.796 ] 00:27:11.796 } 00:27:11.796 [2024-12-09 23:08:50.162518] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
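Each bdevperf pass here is an ordinary standalone invocation: the harness streams the JSON shown in the trace over /dev/fd/62, but a regular file behaves the same. A sketch with an assumed temp path (the binary path, flags, and config body are the ones traced):

cat > /tmp/xnvme_bdev.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "libaio", "conserve_cpu": false,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096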
00:27:11.796 [2024-12-09 23:08:50.162678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69465 ] 00:27:12.055 [2024-12-09 23:08:50.336196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.055 [2024-12-09 23:08:50.437004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.316 Running I/O for 5 seconds... 00:27:14.642 34818.00 IOPS, 136.01 MiB/s [2024-12-09T23:08:54.048Z] 33522.50 IOPS, 130.95 MiB/s [2024-12-09T23:08:54.991Z] 32767.67 IOPS, 128.00 MiB/s [2024-12-09T23:08:56.003Z] 33304.25 IOPS, 130.09 MiB/s [2024-12-09T23:08:56.003Z] 34029.40 IOPS, 132.93 MiB/s 00:27:17.541 Latency(us) 00:27:17.541 [2024-12-09T23:08:56.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.541 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:27:17.541 xnvme_bdev : 5.01 33998.65 132.81 0.00 0.00 1877.78 294.60 7662.67 00:27:17.541 [2024-12-09T23:08:56.003Z] =================================================================================================================== 00:27:17.541 [2024-12-09T23:08:56.003Z] Total : 33998.65 132.81 0.00 0.00 1877.78 294.60 7662.67 00:27:18.111 00:27:18.111 real 0m12.691s 00:27:18.111 user 0m4.901s 00:27:18.111 sys 0m6.282s 00:27:18.111 23:08:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.111 23:08:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:18.111 ************************************ 00:27:18.111 END TEST xnvme_bdevperf 00:27:18.111 ************************************ 00:27:18.111 23:08:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:27:18.111 23:08:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:18.111 23:08:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.111 23:08:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:18.112 ************************************ 00:27:18.112 START TEST xnvme_fio_plugin 00:27:18.112 ************************************ 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:18.112 23:08:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:18.112 { 00:27:18.112 "subsystems": [ 00:27:18.112 { 00:27:18.112 "subsystem": "bdev", 00:27:18.112 "config": [ 00:27:18.112 { 00:27:18.112 "params": { 00:27:18.112 "io_mechanism": "libaio", 00:27:18.112 "conserve_cpu": false, 00:27:18.112 "filename": "/dev/nvme0n1", 00:27:18.112 "name": "xnvme_bdev" 00:27:18.112 }, 00:27:18.112 "method": "bdev_xnvme_create" 00:27:18.112 }, 00:27:18.112 { 00:27:18.112 "method": "bdev_wait_for_examine" 00:27:18.112 } 00:27:18.112 ] 00:27:18.112 } 00:27:18.112 ] 00:27:18.112 } 00:27:18.372 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:27:18.372 fio-3.35 00:27:18.373 Starting 1 thread 00:27:25.015 00:27:25.015 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69584: Mon Dec 9 23:09:02 2024 00:27:25.015 read: IOPS=41.9k, BW=164MiB/s (172MB/s)(819MiB/5002msec) 00:27:25.015 slat (usec): min=3, max=2162, avg=18.76, stdev=56.79 00:27:25.015 clat (usec): min=78, max=7357, avg=999.21, stdev=581.01 00:27:25.015 lat (usec): min=127, max=7369, avg=1017.97, stdev=579.62 00:27:25.015 clat percentiles (usec): 00:27:25.015 | 1.00th=[ 182], 5.00th=[ 269], 10.00th=[ 355], 20.00th=[ 515], 00:27:25.015 | 30.00th=[ 660], 40.00th=[ 775], 50.00th=[ 889], 60.00th=[ 1020], 00:27:25.015 | 70.00th=[ 1172], 80.00th=[ 1401], 90.00th=[ 1795], 95.00th=[ 2180], 00:27:25.015 | 99.00th=[ 2802], 99.50th=[ 3064], 99.90th=[ 3818], 99.95th=[ 4113], 00:27:25.015 | 99.99th=[ 5407] 00:27:25.015 bw ( KiB/s): min=120144, 
max=190784, per=98.66%, avg=165329.78, stdev=27415.30, samples=9 00:27:25.015 iops : min=30036, max=47696, avg=41332.44, stdev=6853.83, samples=9 00:27:25.015 lat (usec) : 100=0.01%, 250=4.02%, 500=15.07%, 750=18.41%, 1000=21.19% 00:27:25.015 lat (msec) : 2=34.26%, 4=6.98%, 10=0.06% 00:27:25.015 cpu : usr=35.85%, sys=50.79%, ctx=30, majf=0, minf=764 00:27:25.015 IO depths : 1=0.2%, 2=0.8%, 4=2.9%, 8=9.1%, 16=24.7%, 32=60.4%, >=64=2.0% 00:27:25.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.015 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:27:25.015 issued rwts: total=209560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.015 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.015 00:27:25.015 Run status group 0 (all jobs): 00:27:25.015 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=819MiB (858MB), run=5002-5002msec 00:27:25.015 ----------------------------------------------------- 00:27:25.015 Suppressions used: 00:27:25.015 count bytes template 00:27:25.015 1 11 /usr/src/fio/parse.c 00:27:25.015 1 8 libtcmalloc_minimal.so 00:27:25.015 1 904 libcrypto.so 00:27:25.015 ----------------------------------------------------- 00:27:25.015 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:25.015 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:27:25.016 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:25.016 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:25.016 
23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:25.016 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:27:25.016 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:25.016 23:09:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:25.016 { 00:27:25.016 "subsystems": [ 00:27:25.016 { 00:27:25.016 "subsystem": "bdev", 00:27:25.016 "config": [ 00:27:25.016 { 00:27:25.016 "params": { 00:27:25.016 "io_mechanism": "libaio", 00:27:25.016 "conserve_cpu": false, 00:27:25.016 "filename": "/dev/nvme0n1", 00:27:25.016 "name": "xnvme_bdev" 00:27:25.016 }, 00:27:25.016 "method": "bdev_xnvme_create" 00:27:25.016 }, 00:27:25.016 { 00:27:25.016 "method": "bdev_wait_for_examine" 00:27:25.016 } 00:27:25.016 ] 00:27:25.016 } 00:27:25.016 ] 00:27:25.016 } 00:27:25.313 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:27:25.313 fio-3.35 00:27:25.313 Starting 1 thread 00:27:31.894 00:27:31.894 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69676: Mon Dec 9 23:09:09 2024 00:27:31.894 write: IOPS=29.3k, BW=114MiB/s (120MB/s)(573MiB/5001msec); 0 zone resets 00:27:31.894 slat (usec): min=4, max=1243, avg=18.42, stdev=35.59 00:27:31.894 clat (usec): min=6, max=342329, avg=1758.10, stdev=9941.87 00:27:31.894 lat (usec): min=42, max=342333, avg=1776.52, stdev=9941.37 00:27:31.894 clat percentiles (usec): 00:27:31.894 | 1.00th=[ 92], 5.00th=[ 212], 10.00th=[ 297], 20.00th=[ 461], 00:27:31.894 | 30.00th=[ 619], 40.00th=[ 775], 50.00th=[ 930], 60.00th=[ 1123], 00:27:31.894 | 70.00th=[ 1385], 80.00th=[ 1860], 90.00th=[ 2671], 95.00th=[ 3687], 00:27:31.894 | 99.00th=[ 7308], 99.50th=[ 8586], 99.90th=[181404], 99.95th=[183501], 00:27:31.894 | 99.99th=[341836] 00:27:31.894 bw ( KiB/s): min=33912, max=163936, per=95.13%, avg=111513.89, stdev=49400.34, samples=9 00:27:31.894 iops : min= 8478, max=40984, avg=27878.44, stdev=12350.05, samples=9 00:27:31.894 lat (usec) : 10=0.01%, 20=0.03%, 50=0.33%, 100=0.79%, 250=5.98% 00:27:31.894 lat (usec) : 500=15.36%, 750=15.84%, 1000=15.80% 00:27:31.895 lat (msec) : 2=28.11%, 4=13.45%, 10=3.96%, 20=0.04%, 50=0.04% 00:27:31.895 lat (msec) : 100=0.09%, 250=0.13%, 500=0.04% 00:27:31.895 cpu : usr=52.82%, sys=31.06%, ctx=35, majf=0, minf=765 00:27:31.895 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=6.5%, 16=17.4%, 32=68.2%, >=64=4.6% 00:27:31.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.895 complete : 0=0.0%, 4=97.0%, 8=0.5%, 16=0.6%, 32=0.6%, 64=1.3%, >=64=0.0% 00:27:31.895 issued rwts: total=0,146563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.895 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:31.895 00:27:31.895 Run status group 0 (all jobs): 00:27:31.895 WRITE: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=573MiB (600MB), run=5001-5001msec 00:27:31.895 ----------------------------------------------------- 00:27:31.895 Suppressions used: 00:27:31.895 count bytes template 00:27:31.895 1 11 /usr/src/fio/parse.c 00:27:31.895 1 8 libtcmalloc_minimal.so 00:27:31.895 1 904 libcrypto.so 00:27:31.895 
----------------------------------------------------- 00:27:31.895 00:27:31.895 00:27:31.895 real 0m13.671s 00:27:31.895 user 0m7.212s 00:27:31.895 sys 0m4.602s 00:27:31.895 23:09:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.895 23:09:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:27:31.895 ************************************ 00:27:31.895 END TEST xnvme_fio_plugin 00:27:31.895 ************************************ 00:27:31.895 23:09:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:27:31.895 23:09:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:27:31.895 23:09:10 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:27:31.895 23:09:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:27:31.895 23:09:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:31.895 23:09:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.895 23:09:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:31.895 ************************************ 00:27:31.895 START TEST xnvme_rpc 00:27:31.895 ************************************ 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69768 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69768 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69768 ']' 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:31.895 23:09:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:31.895 [2024-12-09 23:09:10.274848] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
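The second xnvme_rpc pass that follows repeats the first with conserve_cpu enabled; per the trace the only difference is the trailing -c on create, and the config read-back flips to true. A hedged sketch under the same rpc.py assumption as before:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c maps to conserve_cpu=true
$rpc framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expected: true
$rpc bdev_xnvme_delete xnvme_bdev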
00:27:31.895 [2024-12-09 23:09:10.274972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69768 ] 00:27:32.155 [2024-12-09 23:09:10.430897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.155 [2024-12-09 23:09:10.531017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.725 xnvme_bdev 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.725 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:27:32.987 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69768 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69768 ']' 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69768 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69768 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69768' 00:27:32.988 killing process with pid 69768 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69768 00:27:32.988 23:09:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69768 00:27:34.906 00:27:34.906 real 0m2.675s 00:27:34.906 user 0m2.758s 00:27:34.906 sys 0m0.359s 00:27:34.906 23:09:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:34.906 23:09:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:34.906 ************************************ 00:27:34.906 END TEST xnvme_rpc 00:27:34.906 ************************************ 00:27:34.906 23:09:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:27:34.906 23:09:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:34.906 23:09:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:34.906 23:09:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:34.906 ************************************ 00:27:34.906 START TEST xnvme_bdevperf 00:27:34.906 ************************************ 00:27:34.906 23:09:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:27:34.906 23:09:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:27:34.906 23:09:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:27:34.906 23:09:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:34.906 23:09:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:27:34.906 23:09:12 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:27:34.906 23:09:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:27:34.906 23:09:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.906 { 00:27:34.906 "subsystems": [ 00:27:34.906 { 00:27:34.906 "subsystem": "bdev", 00:27:34.906 "config": [ 00:27:34.906 { 00:27:34.906 "params": { 00:27:34.906 "io_mechanism": "libaio", 00:27:34.906 "conserve_cpu": true, 00:27:34.906 "filename": "/dev/nvme0n1", 00:27:34.906 "name": "xnvme_bdev" 00:27:34.906 }, 00:27:34.906 "method": "bdev_xnvme_create" 00:27:34.906 }, 00:27:34.906 { 00:27:34.906 "method": "bdev_wait_for_examine" 00:27:34.906 } 00:27:34.906 ] 00:27:34.906 } 00:27:34.906 ] 00:27:34.907 } 00:27:34.907 [2024-12-09 23:09:12.980101] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:27:34.907 [2024-12-09 23:09:12.980230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69837 ] 00:27:34.907 [2024-12-09 23:09:13.140240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.907 [2024-12-09 23:09:13.239429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.168 Running I/O for 5 seconds... 00:27:37.052 37467.00 IOPS, 146.36 MiB/s [2024-12-09T23:09:16.899Z] 36581.50 IOPS, 142.90 MiB/s [2024-12-09T23:09:17.843Z] 37886.00 IOPS, 147.99 MiB/s [2024-12-09T23:09:18.790Z] 38625.00 IOPS, 150.88 MiB/s [2024-12-09T23:09:18.790Z] 39246.40 IOPS, 153.31 MiB/s 00:27:40.328 Latency(us) 00:27:40.328 [2024-12-09T23:09:18.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.328 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:27:40.328 xnvme_bdev : 5.00 39232.42 153.25 0.00 0.00 1626.97 41.55 77836.60 00:27:40.328 [2024-12-09T23:09:18.790Z] =================================================================================================================== 00:27:40.328 [2024-12-09T23:09:18.790Z] Total : 39232.42 153.25 0.00 0.00 1626.97 41.55 77836.60 00:27:40.896 23:09:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:40.896 23:09:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:27:40.896 23:09:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:27:40.896 23:09:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:27:40.896 23:09:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:40.896 { 00:27:40.896 "subsystems": [ 00:27:40.896 { 00:27:40.896 "subsystem": "bdev", 00:27:40.896 "config": [ 00:27:40.896 { 00:27:40.896 "params": { 00:27:40.896 "io_mechanism": "libaio", 00:27:40.896 "conserve_cpu": true, 00:27:40.896 "filename": "/dev/nvme0n1", 00:27:40.896 "name": "xnvme_bdev" 00:27:40.896 }, 00:27:40.896 "method": "bdev_xnvme_create" 00:27:40.896 }, 00:27:40.896 { 00:27:40.896 "method": "bdev_wait_for_examine" 00:27:40.896 } 00:27:40.896 ] 00:27:40.896 } 00:27:40.896 ] 00:27:40.896 } 00:27:40.896 [2024-12-09 23:09:19.351430] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
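Every bdevperf pass in this log has the same shape: the harness generates the bdev JSON shown above and hands it to bdevperf over an anonymous descriptor (/dev/fd/62). A minimal standalone reproduction under the same assumptions (workspace paths as traced above, a scratch /dev/nvme0n1; the /tmp path is only illustrative) would be:

    # write out the same bdev config the harness streams over /dev/fd/62
    cat > /tmp/xnvme_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": true,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # same flags as the randwrite run starting here: QD 64, 5 s, 4 KiB I/O
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096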
00:27:40.896 [2024-12-09 23:09:19.351556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69912 ] 00:27:41.157 [2024-12-09 23:09:19.511848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.157 [2024-12-09 23:09:19.613092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.728 Running I/O for 5 seconds... 00:27:43.615 14594.00 IOPS, 57.01 MiB/s [2024-12-09T23:09:23.060Z] 9470.50 IOPS, 36.99 MiB/s [2024-12-09T23:09:24.001Z] 8878.00 IOPS, 34.68 MiB/s [2024-12-09T23:09:24.944Z] 8803.75 IOPS, 34.39 MiB/s [2024-12-09T23:09:25.204Z] 8661.00 IOPS, 33.83 MiB/s 00:27:46.742 Latency(us) 00:27:46.742 [2024-12-09T23:09:25.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.742 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:27:46.742 xnvme_bdev : 5.24 8273.92 32.32 0.00 0.00 7729.63 64.59 732390.01 00:27:46.742 [2024-12-09T23:09:25.204Z] =================================================================================================================== 00:27:46.742 [2024-12-09T23:09:25.204Z] Total : 8273.92 32.32 0.00 0.00 7729.63 64.59 732390.01 00:27:47.682 00:27:47.682 real 0m12.981s 00:27:47.682 user 0m7.994s 00:27:47.682 sys 0m3.467s 00:27:47.682 23:09:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:47.682 23:09:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:47.682 ************************************ 00:27:47.682 END TEST xnvme_bdevperf 00:27:47.682 ************************************ 00:27:47.682 23:09:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:27:47.682 23:09:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:47.682 23:09:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:47.682 23:09:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:47.682 ************************************ 00:27:47.682 START TEST xnvme_fio_plugin 00:27:47.682 ************************************ 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:47.682 23:09:25 
nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:47.682 23:09:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:47.682 { 00:27:47.682 "subsystems": [ 00:27:47.682 { 00:27:47.682 "subsystem": "bdev", 00:27:47.682 "config": [ 00:27:47.682 { 00:27:47.682 "params": { 00:27:47.682 "io_mechanism": "libaio", 00:27:47.682 "conserve_cpu": true, 00:27:47.682 "filename": "/dev/nvme0n1", 00:27:47.682 "name": "xnvme_bdev" 00:27:47.682 }, 00:27:47.682 "method": "bdev_xnvme_create" 00:27:47.682 }, 00:27:47.682 { 00:27:47.682 "method": "bdev_wait_for_examine" 00:27:47.682 } 00:27:47.682 ] 00:27:47.682 } 00:27:47.682 ] 00:27:47.682 } 00:27:47.682 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:27:47.682 fio-3.35 00:27:47.682 Starting 1 thread 00:27:54.255 00:27:54.255 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70031: Mon Dec 9 23:09:31 2024 00:27:54.255 read: IOPS=44.9k, BW=175MiB/s (184MB/s)(877MiB/5001msec) 00:27:54.255 slat (usec): min=3, max=1157, avg=18.50, stdev=27.55 00:27:54.255 clat (usec): min=20, max=179700, avg=873.01, stdev=1661.34 00:27:54.255 lat (usec): min=112, max=179704, avg=891.51, stdev=1662.24 00:27:54.255 clat percentiles (usec): 00:27:54.255 | 1.00th=[ 167], 5.00th=[ 247], 10.00th=[ 322], 20.00th=[ 453], 00:27:54.255 | 30.00th=[ 562], 40.00th=[ 652], 50.00th=[ 750], 60.00th=[ 857], 00:27:54.255 | 70.00th=[ 979], 80.00th=[ 1139], 90.00th=[ 1418], 95.00th=[ 1745], 00:27:54.255 | 99.00th=[ 2704], 99.50th=[ 3064], 99.90th=[ 4424], 99.95th=[26346], 00:27:54.255 | 99.99th=[61604] 00:27:54.255 bw ( KiB/s): min=172344, max=188200, 
per=99.13%, avg=177920.89, stdev=5353.16, samples=9 00:27:54.255 iops : min=43086, max=47050, avg=44480.22, stdev=1338.29, samples=9 00:27:54.255 lat (usec) : 50=0.01%, 100=0.01%, 250=5.16%, 500=19.27%, 750=25.62% 00:27:54.255 lat (usec) : 1000=21.49% 00:27:54.255 lat (msec) : 2=25.25%, 4=3.06%, 10=0.08%, 20=0.01%, 50=0.02% 00:27:54.255 lat (msec) : 100=0.03%, 250=0.01% 00:27:54.255 cpu : usr=31.78%, sys=48.72%, ctx=74, majf=0, minf=764 00:27:54.255 IO depths : 1=0.1%, 2=1.1%, 4=4.1%, 8=10.8%, 16=24.9%, 32=57.0%, >=64=1.9% 00:27:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.255 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:27:54.255 issued rwts: total=224407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.255 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.255 00:27:54.255 Run status group 0 (all jobs): 00:27:54.255 READ: bw=175MiB/s (184MB/s), 175MiB/s-175MiB/s (184MB/s-184MB/s), io=877MiB (919MB), run=5001-5001msec 00:27:54.514 ----------------------------------------------------- 00:27:54.514 Suppressions used: 00:27:54.514 count bytes template 00:27:54.514 1 11 /usr/src/fio/parse.c 00:27:54.514 1 8 libtcmalloc_minimal.so 00:27:54.514 1 904 libcrypto.so 00:27:54.514 ----------------------------------------------------- 00:27:54.514 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:54.514 23:09:32 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:54.514 23:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:54.514 { 00:27:54.514 "subsystems": [ 00:27:54.514 { 00:27:54.514 "subsystem": "bdev", 00:27:54.514 "config": [ 00:27:54.514 { 00:27:54.514 "params": { 00:27:54.514 "io_mechanism": "libaio", 00:27:54.514 "conserve_cpu": true, 00:27:54.514 "filename": "/dev/nvme0n1", 00:27:54.514 "name": "xnvme_bdev" 00:27:54.514 }, 00:27:54.514 "method": "bdev_xnvme_create" 00:27:54.514 }, 00:27:54.514 { 00:27:54.514 "method": "bdev_wait_for_examine" 00:27:54.514 } 00:27:54.514 ] 00:27:54.514 } 00:27:54.514 ] 00:27:54.514 } 00:27:54.514 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:27:54.514 fio-3.35 00:27:54.514 Starting 1 thread 00:28:01.097 00:28:01.097 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70123: Mon Dec 9 23:09:38 2024 00:28:01.097 write: IOPS=34.1k, BW=133MiB/s (140MB/s)(667MiB/5005msec); 0 zone resets 00:28:01.097 slat (usec): min=3, max=1133, avg=16.43, stdev=40.85 00:28:01.097 clat (usec): min=9, max=589884, avg=1434.20, stdev=12032.98 00:28:01.097 lat (usec): min=50, max=589891, avg=1450.62, stdev=12032.56 00:28:01.097 clat percentiles (usec): 00:28:01.097 | 1.00th=[ 153], 5.00th=[ 262], 10.00th=[ 338], 20.00th=[ 486], 00:28:01.097 | 30.00th=[ 611], 40.00th=[ 709], 50.00th=[ 799], 60.00th=[ 889], 00:28:01.097 | 70.00th=[ 1004], 80.00th=[ 1188], 90.00th=[ 1647], 95.00th=[ 4080], 00:28:01.097 | 99.00th=[ 8455], 99.50th=[ 9503], 99.90th=[ 12387], 99.95th=[202376], 00:28:01.097 | 99.99th=[591397] 00:28:01.097 bw ( KiB/s): min=47249, max=201424, per=100.00%, avg=136604.10, stdev=61691.39, samples=10 00:28:01.097 iops : min=11812, max=50356, avg=34151.00, stdev=15422.89, samples=10 00:28:01.097 lat (usec) : 10=0.01%, 20=0.01%, 50=0.05%, 100=0.24%, 250=4.06% 00:28:01.097 lat (usec) : 500=16.65%, 750=23.59%, 1000=24.92% 00:28:01.097 lat (msec) : 2=22.32%, 4=3.09%, 10=4.74%, 20=0.26%, 100=0.01% 00:28:01.097 lat (msec) : 250=0.04%, 500=0.01%, 750=0.04% 00:28:01.097 cpu : usr=54.40%, sys=35.83%, ctx=11, majf=0, minf=765 00:28:01.097 IO depths : 1=0.2%, 2=0.9%, 4=3.4%, 8=9.6%, 16=21.9%, 32=61.0%, >=64=3.1% 00:28:01.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.097 complete : 0=0.0%, 4=97.6%, 8=0.3%, 16=0.3%, 32=0.3%, 64=1.4%, >=64=0.0% 00:28:01.097 issued rwts: total=0,170828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.097 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.097 00:28:01.097 Run status group 0 (all jobs): 00:28:01.097 WRITE: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=667MiB (700MB), run=5005-5005msec 00:28:01.097 ----------------------------------------------------- 00:28:01.097 Suppressions used: 00:28:01.097 count bytes template 00:28:01.097 1 11 
/usr/src/fio/parse.c 00:28:01.097 1 8 libtcmalloc_minimal.so 00:28:01.097 1 904 libcrypto.so 00:28:01.097 ----------------------------------------------------- 00:28:01.097 00:28:01.097 00:28:01.097 real 0m13.582s 00:28:01.097 user 0m7.012s 00:28:01.097 sys 0m4.718s 00:28:01.097 23:09:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:01.097 23:09:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:28:01.097 ************************************ 00:28:01.097 END TEST xnvme_fio_plugin 00:28:01.097 ************************************ 00:28:01.097 23:09:39 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:28:01.097 23:09:39 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:28:01.097 23:09:39 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:28:01.097 23:09:39 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:28:01.097 23:09:39 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:28:01.097 23:09:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:28:01.097 23:09:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:28:01.097 23:09:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:28:01.097 23:09:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:28:01.097 23:09:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:01.097 23:09:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:01.097 23:09:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:01.358 ************************************ 00:28:01.358 START TEST xnvme_rpc 00:28:01.358 ************************************ 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70208 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70208 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70208 ']' 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:01.358 23:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:01.358 [2024-12-09 23:09:39.624352] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
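The xnvme_rpc test spinning up here drives three RPCs against the freshly started spdk_tgt; the rpc_cmd/jq traces that follow are equivalent to this hand-run sequence (a sketch only: the rpc.py path is assumed from the standard SPDK layout, talking to the default /var/tmp/spdk.sock):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # create the xnvme bdev; no -c flag in this pass, so conserve_cpu stays false
    $RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    # read the config back and pick out one param, as the test's jq filters do
    $RPC framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'  # io_uring
    # tear down
    $RPC bdev_xnvme_delete xnvme_bdev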
00:28:01.358 [2024-12-09 23:09:39.624454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70208 ] 00:28:01.358 [2024-12-09 23:09:39.780332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.619 [2024-12-09 23:09:39.881458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.191 xnvme_bdev 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.191 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70208 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70208 ']' 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70208 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70208 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:02.192 killing process with pid 70208 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70208' 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70208 00:28:02.192 23:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70208 00:28:04.176 00:28:04.176 real 0m2.586s 00:28:04.176 user 0m2.684s 00:28:04.176 sys 0m0.363s 00:28:04.176 23:09:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.176 23:09:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:04.176 ************************************ 00:28:04.176 END TEST xnvme_rpc 00:28:04.176 ************************************ 00:28:04.176 23:09:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:28:04.176 23:09:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:04.176 23:09:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.176 23:09:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:04.176 ************************************ 00:28:04.176 START TEST xnvme_bdevperf 00:28:04.176 ************************************ 00:28:04.176 23:09:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:28:04.176 23:09:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:28:04.176 23:09:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:28:04.176 23:09:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:28:04.176 23:09:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:28:04.176 23:09:42 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:28:04.176 23:09:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:28:04.176 23:09:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.176 { 00:28:04.176 "subsystems": [ 00:28:04.176 { 00:28:04.176 "subsystem": "bdev", 00:28:04.176 "config": [ 00:28:04.176 { 00:28:04.176 "params": { 00:28:04.176 "io_mechanism": "io_uring", 00:28:04.176 "conserve_cpu": false, 00:28:04.176 "filename": "/dev/nvme0n1", 00:28:04.176 "name": "xnvme_bdev" 00:28:04.176 }, 00:28:04.176 "method": "bdev_xnvme_create" 00:28:04.176 }, 00:28:04.176 { 00:28:04.176 "method": "bdev_wait_for_examine" 00:28:04.176 } 00:28:04.176 ] 00:28:04.176 } 00:28:04.176 ] 00:28:04.176 } 00:28:04.176 [2024-12-09 23:09:42.250595] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:28:04.176 [2024-12-09 23:09:42.250722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70272 ] 00:28:04.176 [2024-12-09 23:09:42.408920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.176 [2024-12-09 23:09:42.511593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.436 Running I/O for 5 seconds... 00:28:06.316 49092.00 IOPS, 191.77 MiB/s [2024-12-09T23:09:46.169Z] 51976.00 IOPS, 203.03 MiB/s [2024-12-09T23:09:47.117Z] 50903.33 IOPS, 198.84 MiB/s [2024-12-09T23:09:48.073Z] 48973.25 IOPS, 191.30 MiB/s [2024-12-09T23:09:48.073Z] 49218.00 IOPS, 192.26 MiB/s 00:28:09.611 Latency(us) 00:28:09.611 [2024-12-09T23:09:48.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.611 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:28:09.611 xnvme_bdev : 5.00 49190.46 192.15 0.00 0.00 1296.60 299.32 13006.38 00:28:09.611 [2024-12-09T23:09:48.073Z] =================================================================================================================== 00:28:09.611 [2024-12-09T23:09:48.073Z] Total : 49190.46 192.15 0.00 0.00 1296.60 299.32 13006.38 00:28:10.184 23:09:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:28:10.184 23:09:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:28:10.184 23:09:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:28:10.184 23:09:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:28:10.184 23:09:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.184 { 00:28:10.184 "subsystems": [ 00:28:10.184 { 00:28:10.184 "subsystem": "bdev", 00:28:10.184 "config": [ 00:28:10.184 { 00:28:10.184 "params": { 00:28:10.184 "io_mechanism": "io_uring", 00:28:10.184 "conserve_cpu": false, 00:28:10.184 "filename": "/dev/nvme0n1", 00:28:10.184 "name": "xnvme_bdev" 00:28:10.184 }, 00:28:10.184 "method": "bdev_xnvme_create" 00:28:10.184 }, 00:28:10.184 { 00:28:10.184 "method": "bdev_wait_for_examine" 00:28:10.184 } 00:28:10.184 ] 00:28:10.184 } 00:28:10.184 ] 00:28:10.184 } 00:28:10.184 [2024-12-09 23:09:48.541997] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
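The xnvme_fio_plugin passes elsewhere in this log (libaio above, io_uring below) all wrap fio the same way: the SPDK bdev plugin is LD_PRELOADed after the ASan runtime (the sanitizer library must come first in the preload list), and the job targets the bdev by name. Stripped of the xtrace noise, the traced invocation is:

    # /dev/fd/62 carries the same bdev JSON as the bdevperf runs
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev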
00:28:10.184 [2024-12-09 23:09:48.542119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70348 ] 00:28:10.445 [2024-12-09 23:09:48.700500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.446 [2024-12-09 23:09:48.804916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.707 Running I/O for 5 seconds... 00:28:13.032 4714.00 IOPS, 18.41 MiB/s [2024-12-09T23:09:52.067Z] 7504.00 IOPS, 29.31 MiB/s [2024-12-09T23:09:53.448Z] 8263.33 IOPS, 32.28 MiB/s [2024-12-09T23:09:54.401Z] 9025.00 IOPS, 35.25 MiB/s [2024-12-09T23:09:54.401Z] 10257.80 IOPS, 40.07 MiB/s 00:28:15.939 Latency(us) 00:28:15.939 [2024-12-09T23:09:54.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.939 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:28:15.939 xnvme_bdev : 5.01 10262.35 40.09 0.00 0.00 6229.88 53.96 477505.38 00:28:15.939 [2024-12-09T23:09:54.401Z] =================================================================================================================== 00:28:15.939 [2024-12-09T23:09:54.401Z] Total : 10262.35 40.09 0.00 0.00 6229.88 53.96 477505.38 00:28:16.511 00:28:16.511 real 0m12.731s 00:28:16.511 user 0m6.033s 00:28:16.511 sys 0m6.473s 00:28:16.511 23:09:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.511 23:09:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:16.511 ************************************ 00:28:16.511 END TEST xnvme_bdevperf 00:28:16.511 ************************************ 00:28:16.511 23:09:54 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:28:16.511 23:09:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:16.511 23:09:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.511 23:09:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:16.511 ************************************ 00:28:16.511 START TEST xnvme_fio_plugin 00:28:16.511 ************************************ 00:28:16.511 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:28:16.511 23:09:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:28:16.511 23:09:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:28:16.511 23:09:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:28:16.511 23:09:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:16.511 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:16.511 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:16.512 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:16.771 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:16.771 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:16.771 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:28:16.771 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:16.771 23:09:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:16.771 { 00:28:16.771 "subsystems": [ 00:28:16.771 { 00:28:16.771 "subsystem": "bdev", 00:28:16.771 "config": [ 00:28:16.771 { 00:28:16.771 "params": { 00:28:16.771 "io_mechanism": "io_uring", 00:28:16.771 "conserve_cpu": false, 00:28:16.771 "filename": "/dev/nvme0n1", 00:28:16.771 "name": "xnvme_bdev" 00:28:16.771 }, 00:28:16.771 "method": "bdev_xnvme_create" 00:28:16.771 }, 00:28:16.771 { 00:28:16.771 "method": "bdev_wait_for_examine" 00:28:16.771 } 00:28:16.771 ] 00:28:16.771 } 00:28:16.771 ] 00:28:16.771 } 00:28:16.771 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:28:16.771 fio-3.35 00:28:16.771 Starting 1 thread 00:28:23.358 00:28:23.358 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70462: Mon Dec 9 23:10:00 2024 00:28:23.358 read: IOPS=58.4k, BW=228MiB/s (239MB/s)(1140MiB/5001msec) 00:28:23.358 slat (nsec): min=2875, max=77786, avg=3767.36, stdev=1446.05 00:28:23.358 clat (usec): min=180, max=55117, avg=951.92, stdev=308.88 00:28:23.358 lat (usec): min=186, max=55120, avg=955.69, stdev=309.17 00:28:23.358 clat percentiles (usec): 00:28:23.358 | 1.00th=[ 652], 5.00th=[ 693], 10.00th=[ 717], 20.00th=[ 766], 00:28:23.358 | 30.00th=[ 816], 40.00th=[ 857], 50.00th=[ 898], 60.00th=[ 947], 00:28:23.358 | 70.00th=[ 1012], 80.00th=[ 1090], 90.00th=[ 1221], 95.00th=[ 1369], 00:28:23.358 | 99.00th=[ 1893], 99.50th=[ 2212], 99.90th=[ 3392], 99.95th=[ 4424], 00:28:23.358 | 99.99th=[ 6915] 00:28:23.358 bw ( KiB/s): min=196360, 
max=253184, per=100.00%, avg=236891.00, stdev=19063.99, samples=9 00:28:23.358 iops : min=49090, max=63296, avg=59222.67, stdev=4765.93, samples=9 00:28:23.358 lat (usec) : 250=0.01%, 500=0.19%, 750=15.97%, 1000=52.15% 00:28:23.358 lat (msec) : 2=30.88%, 4=0.74%, 10=0.06%, 20=0.01%, 50=0.01% 00:28:23.358 lat (msec) : 100=0.01% 00:28:23.358 cpu : usr=39.70%, sys=59.54%, ctx=13, majf=0, minf=762 00:28:23.358 IO depths : 1=1.3%, 2=2.7%, 4=5.8%, 8=12.2%, 16=24.8%, 32=51.6%, >=64=1.7% 00:28:23.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.358 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:28:23.358 issued rwts: total=291956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.358 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.358 00:28:23.358 Run status group 0 (all jobs): 00:28:23.358 READ: bw=228MiB/s (239MB/s), 228MiB/s-228MiB/s (239MB/s-239MB/s), io=1140MiB (1196MB), run=5001-5001msec 00:28:23.358 ----------------------------------------------------- 00:28:23.358 Suppressions used: 00:28:23.358 count bytes template 00:28:23.358 1 11 /usr/src/fio/parse.c 00:28:23.358 1 8 libtcmalloc_minimal.so 00:28:23.358 1 904 libcrypto.so 00:28:23.358 ----------------------------------------------------- 00:28:23.358 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- 
# asan_lib=/usr/lib64/libasan.so.8 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:23.358 23:10:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:23.358 { 00:28:23.358 "subsystems": [ 00:28:23.358 { 00:28:23.358 "subsystem": "bdev", 00:28:23.358 "config": [ 00:28:23.358 { 00:28:23.358 "params": { 00:28:23.358 "io_mechanism": "io_uring", 00:28:23.358 "conserve_cpu": false, 00:28:23.358 "filename": "/dev/nvme0n1", 00:28:23.358 "name": "xnvme_bdev" 00:28:23.358 }, 00:28:23.358 "method": "bdev_xnvme_create" 00:28:23.358 }, 00:28:23.358 { 00:28:23.358 "method": "bdev_wait_for_examine" 00:28:23.358 } 00:28:23.358 ] 00:28:23.358 } 00:28:23.358 ] 00:28:23.358 } 00:28:23.619 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:28:23.619 fio-3.35 00:28:23.619 Starting 1 thread 00:28:30.259 00:28:30.259 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70553: Mon Dec 9 23:10:07 2024 00:28:30.259 write: IOPS=38.3k, BW=150MiB/s (157MB/s)(749MiB/5007msec); 0 zone resets 00:28:30.259 slat (nsec): min=2263, max=56491, avg=4692.05, stdev=1676.71 00:28:30.259 clat (usec): min=84, max=320916, avg=1491.74, stdev=9174.77 00:28:30.259 lat (usec): min=88, max=320920, avg=1496.43, stdev=9174.76 00:28:30.259 clat percentiles (usec): 00:28:30.259 | 1.00th=[ 676], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 873], 00:28:30.259 | 30.00th=[ 922], 40.00th=[ 979], 50.00th=[ 1029], 60.00th=[ 1090], 00:28:30.259 | 70.00th=[ 1139], 80.00th=[ 1221], 90.00th=[ 1336], 95.00th=[ 1450], 00:28:30.259 | 99.00th=[ 1811], 99.50th=[ 2409], 99.90th=[173016], 99.95th=[208667], 00:28:30.259 | 99.99th=[308282] 00:28:30.259 bw ( KiB/s): min=55432, max=225208, per=100.00%, avg=153420.80, stdev=57631.44, samples=10 00:28:30.259 iops : min=13858, max=56302, avg=38355.20, stdev=14407.86, samples=10 00:28:30.259 lat (usec) : 100=0.01%, 250=0.05%, 500=0.25%, 750=4.56%, 1000=39.60% 00:28:30.259 lat (msec) : 2=54.81%, 4=0.25%, 10=0.15%, 20=0.03%, 50=0.07% 00:28:30.259 lat (msec) : 100=0.01%, 250=0.18%, 500=0.04% 00:28:30.259 cpu : usr=36.04%, sys=63.24%, ctx=14, majf=0, minf=763 00:28:30.259 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.8%, 32=50.5%, >=64=1.6% 00:28:30.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.259 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:28:30.259 issued rwts: total=0,191839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.259 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:30.259 00:28:30.259 Run status group 0 (all jobs): 00:28:30.259 WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=749MiB (786MB), run=5007-5007msec 00:28:30.259 ----------------------------------------------------- 00:28:30.259 Suppressions used: 00:28:30.259 count bytes template 00:28:30.259 1 11 /usr/src/fio/parse.c 00:28:30.259 1 8 libtcmalloc_minimal.so 00:28:30.259 1 904 libcrypto.so 00:28:30.259 
----------------------------------------------------- 00:28:30.259 00:28:30.259 00:28:30.259 real 0m13.485s 00:28:30.259 user 0m6.485s 00:28:30.259 sys 0m6.605s 00:28:30.259 23:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.259 23:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:28:30.259 ************************************ 00:28:30.259 END TEST xnvme_fio_plugin 00:28:30.259 ************************************ 00:28:30.259 23:10:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:28:30.259 23:10:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:28:30.259 23:10:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:28:30.259 23:10:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:28:30.259 23:10:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:30.259 23:10:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.259 23:10:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:30.259 ************************************ 00:28:30.259 START TEST xnvme_rpc 00:28:30.259 ************************************ 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70644 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70644 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70644 ']' 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.259 23:10:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:30.259 [2024-12-09 23:10:08.683553] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
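The xnvme.sh trace lines above (@75-@77 and @82-@84) explain why this log repeats the same three tests: the harness loops over io mechanisms and, inside that, over conserve_cpu settings, rewriting the bdev_xnvme_create params on each pass. Reconstructed from those traces (an approximation; variable and function names are taken from the trace, details may differ):

    for io in "${xnvme_io[@]}"; do                        # libaio, io_uring, ...
      method_bdev_xnvme_create_0["io_mechanism"]=$io
      method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
      for cc in "${xnvme_conserve_cpu[@]}"; do            # false, then true
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc
        run_test xnvme_rpc xnvme_rpc
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
      done
    done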
00:28:30.259 [2024-12-09 23:10:08.683925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70644 ] 00:28:30.520 [2024-12-09 23:10:08.860200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.520 [2024-12-09 23:10:08.959804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:31.092 xnvme_bdev 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:28:31.092 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70644 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70644 ']' 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70644 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70644 00:28:31.353 killing process with pid 70644 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70644' 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70644 00:28:31.353 23:10:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70644 00:28:32.745 ************************************ 00:28:32.745 END TEST xnvme_rpc 00:28:32.745 ************************************ 00:28:32.745 00:28:32.745 real 0m2.344s 00:28:32.745 user 0m2.494s 00:28:32.745 sys 0m0.375s 00:28:32.745 23:10:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.745 23:10:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:32.745 23:10:10 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:28:32.746 23:10:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:32.746 23:10:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.746 23:10:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:32.746 ************************************ 00:28:32.746 START TEST xnvme_bdevperf 00:28:32.746 ************************************ 00:28:32.746 23:10:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:28:32.746 23:10:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:28:32.746 23:10:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:28:32.746 23:10:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:28:32.746 23:10:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:28:32.746 23:10:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:28:32.746 23:10:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:28:32.746 23:10:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:32.746 { 00:28:32.746 "subsystems": [ 00:28:32.746 { 00:28:32.746 "subsystem": "bdev", 00:28:32.746 "config": [ 00:28:32.746 { 00:28:32.746 "params": { 00:28:32.746 "io_mechanism": "io_uring", 00:28:32.746 "conserve_cpu": true, 00:28:32.746 "filename": "/dev/nvme0n1", 00:28:32.746 "name": "xnvme_bdev" 00:28:32.746 }, 00:28:32.746 "method": "bdev_xnvme_create" 00:28:32.746 }, 00:28:32.746 { 00:28:32.746 "method": "bdev_wait_for_examine" 00:28:32.746 } 00:28:32.746 ] 00:28:32.746 } 00:28:32.746 ] 00:28:32.746 } 00:28:32.746 [2024-12-09 23:10:11.048028] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:28:32.746 [2024-12-09 23:10:11.048387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70713 ] 00:28:33.018 [2024-12-09 23:10:11.219122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.018 [2024-12-09 23:10:11.304569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.282 Running I/O for 5 seconds... 00:28:35.163 55996.00 IOPS, 218.73 MiB/s [2024-12-09T23:10:14.614Z] 55717.00 IOPS, 217.64 MiB/s [2024-12-09T23:10:15.571Z] 56100.33 IOPS, 219.14 MiB/s [2024-12-09T23:10:16.518Z] 56099.00 IOPS, 219.14 MiB/s [2024-12-09T23:10:16.518Z] 55494.60 IOPS, 216.78 MiB/s 00:28:38.056 Latency(us) 00:28:38.056 [2024-12-09T23:10:16.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.056 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:28:38.056 xnvme_bdev : 5.01 55447.35 216.59 0.00 0.00 1149.89 304.05 35691.91 00:28:38.056 [2024-12-09T23:10:16.518Z] =================================================================================================================== 00:28:38.056 [2024-12-09T23:10:16.518Z] Total : 55447.35 216.59 0.00 0.00 1149.89 304.05 35691.91 00:28:38.999 23:10:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:28:38.999 23:10:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:28:38.999 23:10:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:28:38.999 23:10:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:28:38.999 23:10:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.999 { 00:28:38.999 "subsystems": [ 00:28:38.999 { 00:28:38.999 "subsystem": "bdev", 00:28:38.999 "config": [ 00:28:38.999 { 00:28:38.999 "params": { 00:28:38.999 "io_mechanism": "io_uring", 00:28:38.999 "conserve_cpu": true, 00:28:38.999 "filename": "/dev/nvme0n1", 00:28:38.999 "name": "xnvme_bdev" 00:28:38.999 }, 00:28:38.999 "method": "bdev_xnvme_create" 00:28:38.999 }, 00:28:38.999 { 00:28:38.999 "method": "bdev_wait_for_examine" 00:28:38.999 } 00:28:38.999 ] 00:28:38.999 } 00:28:38.999 ] 00:28:38.999 } 00:28:38.999 [2024-12-09 23:10:17.369729] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
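A quick way to sanity-check the bdevperf tables in this log is to recompute the MiB/s column from the IOPS column and the 4096-byte I/O size; for the randread result just above:

    # 55447.35 IOPS x 4096 B per I/O, converted to MiB/s
    echo 'scale=2; 55447.35 * 4096 / 1024 / 1024' | bc    # -> 216.59, matching the table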
00:28:38.999 [2024-12-09 23:10:17.369853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70788 ] 00:28:39.260 [2024-12-09 23:10:17.530670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.260 [2024-12-09 23:10:17.630974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.521 Running I/O for 5 seconds... 00:28:41.509 20993.00 IOPS, 82.00 MiB/s [2024-12-09T23:10:20.909Z] 22230.50 IOPS, 86.84 MiB/s [2024-12-09T23:10:22.291Z] 21505.67 IOPS, 84.01 MiB/s [2024-12-09T23:10:23.236Z] 22102.75 IOPS, 86.34 MiB/s [2024-12-09T23:10:23.236Z] 22340.40 IOPS, 87.27 MiB/s 00:28:44.774 Latency(us) 00:28:44.774 [2024-12-09T23:10:23.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.774 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:28:44.774 xnvme_bdev : 5.00 22336.89 87.25 0.00 0.00 2859.85 47.26 51218.90 00:28:44.774 [2024-12-09T23:10:23.236Z] =================================================================================================================== 00:28:44.774 [2024-12-09T23:10:23.236Z] Total : 22336.89 87.25 0.00 0.00 2859.85 47.26 51218.90 00:28:45.345 00:28:45.345 real 0m12.638s 00:28:45.345 user 0m7.759s 00:28:45.346 sys 0m3.876s 00:28:45.346 ************************************ 00:28:45.346 END TEST xnvme_bdevperf 00:28:45.346 ************************************ 00:28:45.346 23:10:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.346 23:10:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:45.346 23:10:23 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:28:45.346 23:10:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:45.346 23:10:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.346 23:10:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:45.346 ************************************ 00:28:45.346 START TEST xnvme_fio_plugin 00:28:45.346 ************************************ 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:45.346 23:10:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:45.346 { 00:28:45.346 "subsystems": [ 00:28:45.346 { 00:28:45.346 "subsystem": "bdev", 00:28:45.346 "config": [ 00:28:45.346 { 00:28:45.346 "params": { 00:28:45.346 "io_mechanism": "io_uring", 00:28:45.346 "conserve_cpu": true, 00:28:45.346 "filename": "/dev/nvme0n1", 00:28:45.346 "name": "xnvme_bdev" 00:28:45.346 }, 00:28:45.346 "method": "bdev_xnvme_create" 00:28:45.346 }, 00:28:45.346 { 00:28:45.346 "method": "bdev_wait_for_examine" 00:28:45.346 } 00:28:45.346 ] 00:28:45.346 } 00:28:45.346 ] 00:28:45.346 } 00:28:45.606 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:28:45.606 fio-3.35 00:28:45.606 Starting 1 thread 00:28:52.245 00:28:52.245 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70902: Mon Dec 9 23:10:29 2024 00:28:52.245 read: IOPS=59.2k, BW=231MiB/s (243MB/s)(1157MiB/5001msec) 00:28:52.245 slat (usec): min=2, max=775, avg= 3.73, stdev= 3.18 00:28:52.245 clat (usec): min=607, max=5171, avg=934.40, stdev=214.89 00:28:52.245 lat (usec): min=610, max=5175, avg=938.13, stdev=215.33 00:28:52.245 clat percentiles (usec): 00:28:52.245 | 1.00th=[ 668], 5.00th=[ 701], 10.00th=[ 725], 20.00th=[ 766], 00:28:52.245 | 30.00th=[ 807], 40.00th=[ 840], 50.00th=[ 881], 60.00th=[ 922], 00:28:52.245 | 70.00th=[ 988], 80.00th=[ 1074], 90.00th=[ 1221], 95.00th=[ 1352], 00:28:52.245 | 99.00th=[ 1614], 99.50th=[ 1762], 99.90th=[ 2114], 99.95th=[ 2573], 00:28:52.245 | 99.99th=[ 3687] 00:28:52.245 bw ( KiB/s): min=188416, max=254464, 
per=99.49%, avg=235797.33, stdev=21691.34, samples=9 00:28:52.245 iops : min=47104, max=63616, avg=58949.33, stdev=5422.84, samples=9 00:28:52.245 lat (usec) : 750=15.89%, 1000=55.64% 00:28:52.245 lat (msec) : 2=28.33%, 4=0.13%, 10=0.01% 00:28:52.245 cpu : usr=44.62%, sys=52.00%, ctx=19, majf=0, minf=762 00:28:52.245 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:28:52.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.245 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:28:52.245 issued rwts: total=296309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.245 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:52.245 00:28:52.245 Run status group 0 (all jobs): 00:28:52.245 READ: bw=231MiB/s (243MB/s), 231MiB/s-231MiB/s (243MB/s-243MB/s), io=1157MiB (1214MB), run=5001-5001msec 00:28:52.245 ----------------------------------------------------- 00:28:52.245 Suppressions used: 00:28:52.245 count bytes template 00:28:52.245 1 11 /usr/src/fio/parse.c 00:28:52.245 1 8 libtcmalloc_minimal.so 00:28:52.245 1 904 libcrypto.so 00:28:52.245 ----------------------------------------------------- 00:28:52.245 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:52.245 23:10:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:28:52.245 { 00:28:52.245 "subsystems": [ 00:28:52.245 { 00:28:52.245 "subsystem": "bdev", 00:28:52.245 "config": [ 00:28:52.245 { 00:28:52.245 "params": { 00:28:52.245 "io_mechanism": "io_uring", 00:28:52.245 "conserve_cpu": true, 00:28:52.245 "filename": "/dev/nvme0n1", 00:28:52.245 "name": "xnvme_bdev" 00:28:52.245 }, 00:28:52.245 "method": "bdev_xnvme_create" 00:28:52.245 }, 00:28:52.245 { 00:28:52.245 "method": "bdev_wait_for_examine" 00:28:52.245 } 00:28:52.245 ] 00:28:52.245 } 00:28:52.245 ] 00:28:52.245 } 00:28:52.245 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:28:52.245 fio-3.35 00:28:52.245 Starting 1 thread 00:28:58.825 00:28:58.825 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70993: Mon Dec 9 23:10:36 2024 00:28:58.825 write: IOPS=47.3k, BW=185MiB/s (194MB/s)(961MiB/5201msec); 0 zone resets 00:28:58.825 slat (usec): min=2, max=563, avg= 4.25, stdev= 4.99 00:28:58.825 clat (usec): min=40, max=228973, avg=1203.99, stdev=4390.88 00:28:58.825 lat (usec): min=43, max=228977, avg=1208.24, stdev=4390.96 00:28:58.825 clat percentiles (usec): 00:28:58.826 | 1.00th=[ 235], 5.00th=[ 660], 10.00th=[ 709], 20.00th=[ 758], 00:28:58.826 | 30.00th=[ 807], 40.00th=[ 857], 50.00th=[ 906], 60.00th=[ 963], 00:28:58.826 | 70.00th=[ 1045], 80.00th=[ 1139], 90.00th=[ 1450], 95.00th=[ 2245], 00:28:58.826 | 99.00th=[ 4555], 99.50th=[ 5473], 99.90th=[ 34341], 99.95th=[ 94897], 00:28:58.826 | 99.99th=[229639] 00:28:58.826 bw ( KiB/s): min=107640, max=238592, per=100.00%, avg=196574.90, stdev=47438.95, samples=10 00:28:58.826 iops : min=26910, max=59648, avg=49143.70, stdev=11859.74, samples=10 00:28:58.826 lat (usec) : 50=0.01%, 100=0.20%, 250=0.89%, 500=1.89%, 750=15.09% 00:28:58.826 lat (usec) : 1000=46.48% 00:28:58.826 lat (msec) : 2=29.73%, 4=4.20%, 10=1.28%, 20=0.04%, 50=0.10% 00:28:58.826 lat (msec) : 100=0.05%, 250=0.03% 00:28:58.826 cpu : usr=48.13%, sys=44.88%, ctx=33, majf=0, minf=763 00:28:58.826 IO depths : 1=1.3%, 2=2.7%, 4=5.4%, 8=10.9%, 16=22.6%, 32=54.6%, >=64=2.5% 00:28:58.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:58.826 complete : 0=0.0%, 4=98.0%, 8=0.2%, 16=0.2%, 32=0.2%, 64=1.4%, >=64=0.0% 00:28:58.826 issued rwts: total=0,245926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:58.826 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:58.826 00:28:58.826 Run status group 0 (all jobs): 00:28:58.826 WRITE: bw=185MiB/s (194MB/s), 185MiB/s-185MiB/s (194MB/s-194MB/s), io=961MiB (1007MB), run=5201-5201msec 00:28:59.085 ----------------------------------------------------- 00:28:59.085 Suppressions used: 00:28:59.085 count bytes template 00:28:59.085 1 11 /usr/src/fio/parse.c 00:28:59.085 1 8 libtcmalloc_minimal.so 00:28:59.085 1 904 libcrypto.so 00:28:59.085 ----------------------------------------------------- 00:28:59.085 00:28:59.085 00:28:59.085 
real 0m13.726s 00:28:59.085 user 0m7.460s 00:28:59.085 sys 0m5.427s 00:28:59.086 23:10:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.086 23:10:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:28:59.086 ************************************ 00:28:59.086 END TEST xnvme_fio_plugin 00:28:59.086 ************************************ 00:28:59.086 23:10:37 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:28:59.086 23:10:37 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:28:59.086 23:10:37 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:28:59.086 23:10:37 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:28:59.086 23:10:37 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:28:59.086 23:10:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:28:59.086 23:10:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:28:59.086 23:10:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:28:59.086 23:10:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:28:59.086 23:10:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:59.086 23:10:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.086 23:10:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:59.086 ************************************ 00:28:59.086 START TEST xnvme_rpc 00:28:59.086 ************************************ 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71075 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71075 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71075 ']' 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:59.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.086 23:10:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:59.086 [2024-12-09 23:10:37.496319] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
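At this point xnvme.sh switches the matrix from io_uring on /dev/nvme0n1 to io_uring_cmd on the char device /dev/ng0n1, with conserve_cpu starting at false. Reconstructed from the xnvme/xnvme.sh trace lines (@75 through @88), the driver is a nested loop over io mechanisms and the conserve_cpu toggle; the full xnvme_io contents are an assumption here, since this log only shows io_uring and io_uring_cmd:

xnvme_io=(io_uring io_uring_cmd)      # possibly more mechanisms in the real script
xnvme_conserve_cpu=(false true)
declare -A method_bdev_xnvme_create_0

for io in "${xnvme_io[@]}"; do
    method_bdev_xnvme_create_0["io_mechanism"]=$io
    method_bdev_xnvme_create_0["filename"]=$filename    # /dev/nvme0n1 above, /dev/ng0n1 from here on
    for cc in "${xnvme_conserve_cpu[@]}"; do
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc
        run_test xnvme_rpc xnvme_rpc
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
    done
done

run_test and filename come from the surrounding autotest framework; the loop body matches the @86 through @88 run_test calls seen in this trace.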
00:28:59.086 [2024-12-09 23:10:37.496546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71075 ] 00:28:59.347 [2024-12-09 23:10:37.651887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.347 [2024-12-09 23:10:37.735541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.917 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.917 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:59.917 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:28:59.917 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.917 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:00.180 xnvme_bdev 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71075 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71075 ']' 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71075 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71075 00:29:00.180 killing process with pid 71075 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71075' 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71075 00:29:00.180 23:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71075 00:29:01.657 00:29:01.657 real 0m2.352s 00:29:01.657 user 0m2.508s 00:29:01.657 sys 0m0.345s 00:29:01.657 ************************************ 00:29:01.657 END TEST xnvme_rpc 00:29:01.657 ************************************ 00:29:01.657 23:10:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.657 23:10:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:01.657 23:10:39 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:29:01.657 23:10:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:01.657 23:10:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.657 23:10:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:01.657 ************************************ 00:29:01.657 START TEST xnvme_bdevperf 00:29:01.657 ************************************ 00:29:01.657 23:10:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:29:01.657 23:10:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:29:01.657 23:10:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:29:01.657 23:10:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:29:01.657 23:10:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:29:01.657 23:10:39 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:29:01.657 23:10:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:29:01.657 23:10:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.657 { 00:29:01.657 "subsystems": [ 00:29:01.657 { 00:29:01.657 "subsystem": "bdev", 00:29:01.657 "config": [ 00:29:01.657 { 00:29:01.657 "params": { 00:29:01.657 "io_mechanism": "io_uring_cmd", 00:29:01.657 "conserve_cpu": false, 00:29:01.657 "filename": "/dev/ng0n1", 00:29:01.657 "name": "xnvme_bdev" 00:29:01.657 }, 00:29:01.657 "method": "bdev_xnvme_create" 00:29:01.657 }, 00:29:01.657 { 00:29:01.657 "method": "bdev_wait_for_examine" 00:29:01.657 } 00:29:01.657 ] 00:29:01.657 } 00:29:01.657 ] 00:29:01.657 } 00:29:01.657 [2024-12-09 23:10:39.868022] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:29:01.657 [2024-12-09 23:10:39.868259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71148 ] 00:29:01.657 [2024-12-09 23:10:40.021285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.919 [2024-12-09 23:10:40.123049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.919 Running I/O for 5 seconds... 00:29:04.250 62039.00 IOPS, 242.34 MiB/s [2024-12-09T23:10:43.668Z] 60636.50 IOPS, 236.86 MiB/s [2024-12-09T23:10:44.610Z] 60000.33 IOPS, 234.38 MiB/s [2024-12-09T23:10:45.557Z] 59622.50 IOPS, 232.90 MiB/s [2024-12-09T23:10:45.557Z] 58877.00 IOPS, 229.99 MiB/s 00:29:07.095 Latency(us) 00:29:07.095 [2024-12-09T23:10:45.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.095 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:29:07.095 xnvme_bdev : 5.01 58825.15 229.79 0.00 0.00 1083.64 354.46 7914.73 00:29:07.095 [2024-12-09T23:10:45.557Z] =================================================================================================================== 00:29:07.095 [2024-12-09T23:10:45.557Z] Total : 58825.15 229.79 0.00 0.00 1083.64 354.46 7914.73 00:29:07.666 23:10:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:29:07.927 23:10:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:29:07.927 23:10:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:29:07.927 23:10:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.927 23:10:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:29:07.927 { 00:29:07.927 "subsystems": [ 00:29:07.927 { 00:29:07.927 "subsystem": "bdev", 00:29:07.927 "config": [ 00:29:07.927 { 00:29:07.927 "params": { 00:29:07.927 "io_mechanism": "io_uring_cmd", 00:29:07.927 "conserve_cpu": false, 00:29:07.927 "filename": "/dev/ng0n1", 00:29:07.927 "name": "xnvme_bdev" 00:29:07.927 }, 00:29:07.927 "method": "bdev_xnvme_create" 00:29:07.927 }, 00:29:07.927 { 00:29:07.927 "method": "bdev_wait_for_examine" 00:29:07.927 } 00:29:07.927 ] 00:29:07.927 } 00:29:07.927 ] 00:29:07.927 } 00:29:07.927 [2024-12-09 23:10:46.192434] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
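For reference, the MiB/s column in these latency tables is just IOPS times the 4 KiB IO size; checking the randread total above:

awk 'BEGIN { printf "%.2f MiB/s\n", 58825.15 * 4096 / (1024 * 1024) }'
# prints 229.79 MiB/s, matching the 229.79 reported for xnvme_bdev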
00:29:07.927 [2024-12-09 23:10:46.192552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71217 ] 00:29:07.927 [2024-12-09 23:10:46.354356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.187 [2024-12-09 23:10:46.454289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.448 Running I/O for 5 seconds... 00:29:10.332 35138.00 IOPS, 137.26 MiB/s [2024-12-09T23:10:49.741Z] 34284.50 IOPS, 133.92 MiB/s [2024-12-09T23:10:51.130Z] 36397.00 IOPS, 142.18 MiB/s [2024-12-09T23:10:51.702Z] 37410.75 IOPS, 146.14 MiB/s [2024-12-09T23:10:51.962Z] 36663.80 IOPS, 143.22 MiB/s 00:29:13.500 Latency(us) 00:29:13.500 [2024-12-09T23:10:51.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.500 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:29:13.500 xnvme_bdev : 5.03 36486.00 142.52 0.00 0.00 1749.24 47.66 161319.38 00:29:13.500 [2024-12-09T23:10:51.962Z] =================================================================================================================== 00:29:13.500 [2024-12-09T23:10:51.962Z] Total : 36486.00 142.52 0.00 0.00 1749.24 47.66 161319.38 00:29:14.071 23:10:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:29:14.071 23:10:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:29:14.071 23:10:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:29:14.071 23:10:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:29:14.071 23:10:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.071 { 00:29:14.071 "subsystems": [ 00:29:14.071 { 00:29:14.071 "subsystem": "bdev", 00:29:14.071 "config": [ 00:29:14.071 { 00:29:14.071 "params": { 00:29:14.071 "io_mechanism": "io_uring_cmd", 00:29:14.071 "conserve_cpu": false, 00:29:14.071 "filename": "/dev/ng0n1", 00:29:14.071 "name": "xnvme_bdev" 00:29:14.071 }, 00:29:14.071 "method": "bdev_xnvme_create" 00:29:14.071 }, 00:29:14.071 { 00:29:14.071 "method": "bdev_wait_for_examine" 00:29:14.071 } 00:29:14.071 ] 00:29:14.071 } 00:29:14.071 ] 00:29:14.071 } 00:29:14.071 [2024-12-09 23:10:52.496997] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:29:14.072 [2024-12-09 23:10:52.497324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71291 ] 00:29:14.337 [2024-12-09 23:10:52.657146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.337 [2024-12-09 23:10:52.758835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.602 Running I/O for 5 seconds... 
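Every bdevperf pass in this section, including the unmap run starting here, reads its bdev config as JSON from --json /dev/fd/62 while gen_conf (the dd/common.sh helper traced above) emits the subsystems document shown inline. That is consistent with bash process substitution; the exact fd number is the shell's choice, so treating /dev/fd/62 as <(gen_conf) is an inference, roughly:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_conf) -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096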
00:29:16.627 90560.00 IOPS, 353.75 MiB/s [2024-12-09T23:10:56.047Z] 88096.00 IOPS, 344.12 MiB/s [2024-12-09T23:10:57.433Z] 87530.67 IOPS, 341.92 MiB/s [2024-12-09T23:10:58.379Z] 87952.00 IOPS, 343.56 MiB/s 00:29:19.917 Latency(us) 00:29:19.917 [2024-12-09T23:10:58.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.917 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:29:19.917 xnvme_bdev : 5.00 87928.35 343.47 0.00 0.00 724.27 444.26 2482.81 00:29:19.917 [2024-12-09T23:10:58.379Z] =================================================================================================================== 00:29:19.917 [2024-12-09T23:10:58.379Z] Total : 87928.35 343.47 0.00 0.00 724.27 444.26 2482.81 00:29:20.490 23:10:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:29:20.490 23:10:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:29:20.490 23:10:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:29:20.490 23:10:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:29:20.490 23:10:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.490 { 00:29:20.490 "subsystems": [ 00:29:20.490 { 00:29:20.490 "subsystem": "bdev", 00:29:20.490 "config": [ 00:29:20.490 { 00:29:20.490 "params": { 00:29:20.490 "io_mechanism": "io_uring_cmd", 00:29:20.490 "conserve_cpu": false, 00:29:20.490 "filename": "/dev/ng0n1", 00:29:20.490 "name": "xnvme_bdev" 00:29:20.490 }, 00:29:20.490 "method": "bdev_xnvme_create" 00:29:20.490 }, 00:29:20.490 { 00:29:20.490 "method": "bdev_wait_for_examine" 00:29:20.490 } 00:29:20.490 ] 00:29:20.490 } 00:29:20.490 ] 00:29:20.490 } 00:29:20.490 [2024-12-09 23:10:58.823143] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:29:20.490 [2024-12-09 23:10:58.823280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71365 ] 00:29:20.754 [2024-12-09 23:10:58.980386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.754 [2024-12-09 23:10:59.083411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.015 Running I/O for 5 seconds... 
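The -w workloads cycled against /dev/ng0n1 (randread, randwrite, unmap, write_zeroes) come from a bash nameref: the xtrace above shows local -n io_pattern_ref=io_uring_cmd, so the workload list is an array named after the io mechanism. A sketch, with the io_uring_cmd contents inferred from the -w flags in this section and the io_uring list taken from the earlier /dev/nvme0n1 passes:

io_uring=(randread randwrite)
io_uring_cmd=(randread randwrite unmap write_zeroes)

xnvme_bdevperf() {
    local io_pattern
    local -n io_pattern_ref=io_uring_cmd   # the xtrace shows the nameref target already expanded
    for io_pattern in "${io_pattern_ref[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json <(gen_conf) -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done
}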
00:29:22.960 595.00 IOPS, 2.32 MiB/s [2024-12-09T23:11:02.366Z] 405.50 IOPS, 1.58 MiB/s [2024-12-09T23:11:03.754Z] 468.67 IOPS, 1.83 MiB/s [2024-12-09T23:11:04.699Z] 500.25 IOPS, 1.95 MiB/s [2024-12-09T23:11:04.699Z] 522.80 IOPS, 2.04 MiB/s 00:29:26.237 Latency(us) 00:29:26.237 [2024-12-09T23:11:04.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.237 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:29:26.237 xnvme_bdev : 5.08 527.28 2.06 0.00 0.00 120331.93 84.28 851766.35 00:29:26.237 [2024-12-09T23:11:04.699Z] =================================================================================================================== 00:29:26.237 [2024-12-09T23:11:04.699Z] Total : 527.28 2.06 0.00 0.00 120331.93 84.28 851766.35 00:29:26.809 00:29:26.809 real 0m25.176s 00:29:26.809 user 0m14.170s 00:29:26.809 sys 0m10.611s 00:29:26.809 23:11:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.809 23:11:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:26.809 ************************************ 00:29:26.809 END TEST xnvme_bdevperf 00:29:26.809 ************************************ 00:29:26.809 23:11:05 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:29:26.809 23:11:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:26.809 23:11:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.809 23:11:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:26.809 ************************************ 00:29:26.809 START TEST xnvme_fio_plugin 00:29:26.809 ************************************ 00:29:26.809 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:29:26.809 23:11:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:29:26.809 23:11:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:29:26.809 23:11:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
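The fio_plugin setup continuing below repeats the sanitizer dance from the earlier io_uring pass: ldd the SPDK fio plugin, pick out the ASan runtime it links against, and preload that runtime ahead of the plugin so interception is in place before fio loads it. The real helper iterates over ('libasan' 'libclang_rt.asan') and breaks on the first match; condensed to the case this log actually hits:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 in this run
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev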
00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:26.810 23:11:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:29:26.810 { 00:29:26.810 "subsystems": [ 00:29:26.810 { 00:29:26.810 "subsystem": "bdev", 00:29:26.810 "config": [ 00:29:26.810 { 00:29:26.810 "params": { 00:29:26.810 "io_mechanism": "io_uring_cmd", 00:29:26.810 "conserve_cpu": false, 00:29:26.810 "filename": "/dev/ng0n1", 00:29:26.810 "name": "xnvme_bdev" 00:29:26.810 }, 00:29:26.810 "method": "bdev_xnvme_create" 00:29:26.810 }, 00:29:26.810 { 00:29:26.810 "method": "bdev_wait_for_examine" 00:29:26.810 } 00:29:26.810 ] 00:29:26.810 } 00:29:26.810 ] 00:29:26.810 } 00:29:26.810 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:29:26.810 fio-3.35 00:29:26.810 Starting 1 thread 00:29:33.397 00:29:33.397 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71478: Mon Dec 9 23:11:10 2024 00:29:33.397 read: IOPS=50.3k, BW=196MiB/s (206MB/s)(982MiB/5001msec) 00:29:33.397 slat (nsec): min=2208, max=62348, avg=4294.28, stdev=2160.77 00:29:33.397 clat (usec): min=425, max=3915, avg=1108.29, stdev=253.72 00:29:33.398 lat (usec): min=428, max=3923, avg=1112.58, stdev=254.33 00:29:33.398 clat percentiles (usec): 00:29:33.398 | 1.00th=[ 676], 5.00th=[ 758], 10.00th=[ 824], 20.00th=[ 898], 00:29:33.398 | 30.00th=[ 955], 40.00th=[ 1012], 50.00th=[ 1074], 60.00th=[ 1139], 00:29:33.398 | 70.00th=[ 1221], 80.00th=[ 1303], 90.00th=[ 1434], 95.00th=[ 1565], 00:29:33.398 | 99.00th=[ 1876], 99.50th=[ 1991], 99.90th=[ 2245], 99.95th=[ 2409], 00:29:33.398 | 99.99th=[ 3720] 00:29:33.398 bw ( KiB/s): min=176640, max=221184, per=99.78%, avg=200681.78, stdev=15349.52, samples=9 00:29:33.398 iops : min=44160, max=55296, avg=50170.44, stdev=3837.38, samples=9 00:29:33.398 lat (usec) : 500=0.03%, 750=4.34%, 1000=33.02% 00:29:33.398 lat (msec) : 2=62.16%, 4=0.45% 00:29:33.398 cpu : usr=41.78%, sys=57.46%, ctx=12, majf=0, minf=762 00:29:33.398 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:29:33.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.398 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:29:33.398 issued rwts: total=251463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.398 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:33.398 00:29:33.398 Run status group 0 (all jobs): 00:29:33.398 READ: bw=196MiB/s (206MB/s), 196MiB/s-196MiB/s (206MB/s-206MB/s), io=982MiB (1030MB), run=5001-5001msec 00:29:33.398 ----------------------------------------------------- 00:29:33.398 Suppressions used: 00:29:33.398 count bytes template 00:29:33.398 1 11 /usr/src/fio/parse.c 00:29:33.398 1 8 libtcmalloc_minimal.so 00:29:33.398 1 904 libcrypto.so 00:29:33.398 ----------------------------------------------------- 00:29:33.398 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:33.398 23:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:29:33.398 { 00:29:33.398 "subsystems": [ 00:29:33.398 { 00:29:33.398 "subsystem": "bdev", 00:29:33.398 "config": [ 00:29:33.398 { 00:29:33.398 "params": { 00:29:33.398 "io_mechanism": "io_uring_cmd", 00:29:33.398 "conserve_cpu": false, 00:29:33.398 "filename": "/dev/ng0n1", 00:29:33.398 "name": "xnvme_bdev" 00:29:33.398 }, 00:29:33.398 "method": "bdev_xnvme_create" 00:29:33.398 }, 00:29:33.398 { 00:29:33.398 "method": "bdev_wait_for_examine" 00:29:33.398 } 00:29:33.398 ] 00:29:33.398 } 00:29:33.398 ] 00:29:33.398 } 00:29:33.658 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:29:33.658 fio-3.35 00:29:33.658 Starting 1 thread 00:29:40.246 00:29:40.246 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71569: Mon Dec 9 23:11:17 2024 00:29:40.246 write: IOPS=41.5k, BW=162MiB/s (170MB/s)(811MiB/5002msec); 0 zone resets 00:29:40.246 slat (nsec): min=2222, max=70805, avg=4061.03, stdev=1767.91 00:29:40.246 clat (usec): min=43, max=198666, avg=1397.75, stdev=4654.51 00:29:40.246 lat (usec): min=49, max=198670, avg=1401.82, stdev=4654.53 00:29:40.246 clat percentiles (usec): 00:29:40.247 | 1.00th=[ 396], 5.00th=[ 676], 10.00th=[ 775], 20.00th=[ 898], 00:29:40.247 | 30.00th=[ 979], 40.00th=[ 1057], 50.00th=[ 1139], 60.00th=[ 1205], 00:29:40.247 | 70.00th=[ 1303], 80.00th=[ 1434], 90.00th=[ 1680], 95.00th=[ 2008], 00:29:40.247 | 99.00th=[ 5211], 99.50th=[ 6652], 99.90th=[ 38011], 99.95th=[166724], 00:29:40.247 | 99.99th=[198181] 00:29:40.247 bw ( KiB/s): min=17752, max=221208, per=100.00%, avg=167582.22, stdev=59599.20, samples=9 00:29:40.247 iops : min= 4438, max=55302, avg=41895.56, stdev=14899.80, samples=9 00:29:40.247 lat (usec) : 50=0.01%, 100=0.03%, 250=0.21%, 500=1.69%, 750=6.53% 00:29:40.247 lat (usec) : 1000=24.09% 00:29:40.247 lat (msec) : 2=62.41%, 4=3.43%, 10=1.45%, 20=0.02%, 50=0.04% 00:29:40.247 lat (msec) : 100=0.03%, 250=0.06% 00:29:40.247 cpu : usr=37.77%, sys=61.41%, ctx=9, majf=0, minf=763 00:29:40.247 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=10.5%, 16=23.3%, 32=55.6%, >=64=2.1% 00:29:40.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.247 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.5%, >=64=0.0% 00:29:40.247 issued rwts: total=0,207590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.247 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:40.247 00:29:40.247 Run status group 0 (all jobs): 00:29:40.247 WRITE: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=811MiB (850MB), run=5002-5002msec 00:29:40.247 ----------------------------------------------------- 00:29:40.247 Suppressions used: 00:29:40.247 count bytes template 00:29:40.247 1 11 /usr/src/fio/parse.c 00:29:40.247 1 8 libtcmalloc_minimal.so 00:29:40.247 1 904 libcrypto.so 00:29:40.247 ----------------------------------------------------- 00:29:40.247 00:29:40.247 ************************************ 00:29:40.247 END TEST xnvme_fio_plugin 00:29:40.247 ************************************ 00:29:40.247 00:29:40.247 real 0m13.469s 00:29:40.247 user 0m6.643s 00:29:40.247 sys 0m6.415s 00:29:40.247 23:11:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.247 23:11:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:29:40.247 23:11:18 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:29:40.247 23:11:18 nvme_xnvme -- 
xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:29:40.247 23:11:18 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:29:40.247 23:11:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:29:40.247 23:11:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:40.247 23:11:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.247 23:11:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:40.247 ************************************ 00:29:40.247 START TEST xnvme_rpc 00:29:40.247 ************************************ 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71654 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71654 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71654 ']' 00:29:40.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.247 23:11:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:40.247 [2024-12-09 23:11:18.645692] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:29:40.247 [2024-12-09 23:11:18.645982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71654 ] 00:29:40.507 [2024-12-09 23:11:18.807769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.507 [2024-12-09 23:11:18.908388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:41.080 xnvme_bdev 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:29:41.080 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71654 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71654 ']' 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71654 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71654 00:29:41.343 killing process with pid 71654 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71654' 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71654 00:29:41.343 23:11:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71654 00:29:42.728 ************************************ 00:29:42.728 END TEST xnvme_rpc 00:29:42.728 ************************************ 00:29:42.728 00:29:42.728 real 0m2.610s 00:29:42.728 user 0m2.712s 00:29:42.728 sys 0m0.372s 00:29:42.728 23:11:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.728 23:11:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:43.021 23:11:21 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:29:43.021 23:11:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:43.021 23:11:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.021 23:11:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:43.021 ************************************ 00:29:43.021 START TEST xnvme_bdevperf 00:29:43.021 ************************************ 00:29:43.021 23:11:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:29:43.021 23:11:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:29:43.021 23:11:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:29:43.021 23:11:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:29:43.021 23:11:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:29:43.021 23:11:21 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:29:43.021 23:11:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:29:43.021 23:11:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.021 { 00:29:43.021 "subsystems": [ 00:29:43.021 { 00:29:43.021 "subsystem": "bdev", 00:29:43.021 "config": [ 00:29:43.021 { 00:29:43.021 "params": { 00:29:43.021 "io_mechanism": "io_uring_cmd", 00:29:43.021 "conserve_cpu": true, 00:29:43.021 "filename": "/dev/ng0n1", 00:29:43.021 "name": "xnvme_bdev" 00:29:43.021 }, 00:29:43.021 "method": "bdev_xnvme_create" 00:29:43.021 }, 00:29:43.021 { 00:29:43.021 "method": "bdev_wait_for_examine" 00:29:43.021 } 00:29:43.021 ] 00:29:43.021 } 00:29:43.021 ] 00:29:43.021 } 00:29:43.021 [2024-12-09 23:11:21.281968] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:29:43.021 [2024-12-09 23:11:21.282081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71717 ] 00:29:43.021 [2024-12-09 23:11:21.442898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.282 [2024-12-09 23:11:21.544472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.544 Running I/O for 5 seconds... 00:29:45.430 49036.00 IOPS, 191.55 MiB/s [2024-12-09T23:11:24.836Z] 52192.50 IOPS, 203.88 MiB/s [2024-12-09T23:11:25.849Z] 53690.00 IOPS, 209.73 MiB/s [2024-12-09T23:11:27.234Z] 55159.75 IOPS, 215.47 MiB/s 00:29:48.772 Latency(us) 00:29:48.772 [2024-12-09T23:11:27.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.772 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:29:48.772 xnvme_bdev : 5.00 56086.24 219.09 0.00 0.00 1136.81 494.67 5772.21 00:29:48.772 [2024-12-09T23:11:27.234Z] =================================================================================================================== 00:29:48.772 [2024-12-09T23:11:27.234Z] Total : 56086.24 219.09 0.00 0.00 1136.81 494.67 5772.21 00:29:49.343 23:11:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:29:49.343 23:11:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:29:49.343 23:11:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:29:49.343 23:11:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:29:49.343 23:11:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.343 { 00:29:49.343 "subsystems": [ 00:29:49.343 { 00:29:49.343 "subsystem": "bdev", 00:29:49.343 "config": [ 00:29:49.343 { 00:29:49.343 "params": { 00:29:49.343 "io_mechanism": "io_uring_cmd", 00:29:49.343 "conserve_cpu": true, 00:29:49.343 "filename": "/dev/ng0n1", 00:29:49.343 "name": "xnvme_bdev" 00:29:49.343 }, 00:29:49.343 "method": "bdev_xnvme_create" 00:29:49.343 }, 00:29:49.343 { 00:29:49.343 "method": "bdev_wait_for_examine" 00:29:49.343 } 00:29:49.343 ] 00:29:49.343 } 00:29:49.343 ] 00:29:49.343 } 00:29:49.343 [2024-12-09 23:11:27.566823] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
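For reference, the randread pass starting above is driven entirely by the JSON that gen_conf writes to /dev/fd/62. A minimal standalone sketch, run from the spdk repo root, assuming a hypothetical scratch path /tmp/xnvme_bdev.json and the same /dev/ng0n1 namespace, with the config body copied from the trace above:

# sketch only: the config is the one printed above; the file path is assumed
cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096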
00:29:49.343 [2024-12-09 23:11:27.566918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71797 ] 00:29:49.343 [2024-12-09 23:11:27.718390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.602 [2024-12-09 23:11:27.819051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.602 Running I/O for 5 seconds... 00:29:51.926 43854.00 IOPS, 171.30 MiB/s [2024-12-09T23:11:31.328Z] 42774.00 IOPS, 167.09 MiB/s [2024-12-09T23:11:32.271Z] 43675.00 IOPS, 170.61 MiB/s [2024-12-09T23:11:33.215Z] 43191.00 IOPS, 168.71 MiB/s [2024-12-09T23:11:33.215Z] 37693.00 IOPS, 147.24 MiB/s 00:29:54.753 Latency(us) 00:29:54.753 [2024-12-09T23:11:33.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.753 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:29:54.753 xnvme_bdev : 5.01 37663.22 147.12 0.00 0.00 1694.33 47.66 308120.02 00:29:54.753 [2024-12-09T23:11:33.215Z] =================================================================================================================== 00:29:54.753 [2024-12-09T23:11:33.215Z] Total : 37663.22 147.12 0.00 0.00 1694.33 47.66 308120.02 00:29:55.696 23:11:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:29:55.696 23:11:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:29:55.696 23:11:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:29:55.696 23:11:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:29:55.696 23:11:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:55.696 { 00:29:55.696 "subsystems": [ 00:29:55.696 { 00:29:55.696 "subsystem": "bdev", 00:29:55.696 "config": [ 00:29:55.696 { 00:29:55.696 "params": { 00:29:55.696 "io_mechanism": "io_uring_cmd", 00:29:55.696 "conserve_cpu": true, 00:29:55.696 "filename": "/dev/ng0n1", 00:29:55.696 "name": "xnvme_bdev" 00:29:55.696 }, 00:29:55.696 "method": "bdev_xnvme_create" 00:29:55.696 }, 00:29:55.696 { 00:29:55.696 "method": "bdev_wait_for_examine" 00:29:55.696 } 00:29:55.696 ] 00:29:55.696 } 00:29:55.696 ] 00:29:55.696 } 00:29:55.696 [2024-12-09 23:11:33.848831] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:29:55.696 [2024-12-09 23:11:33.848951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71871 ] 00:29:55.696 [2024-12-09 23:11:34.010760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.696 [2024-12-09 23:11:34.111213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.965 Running I/O for 5 seconds... 
00:29:58.343 90944.00 IOPS, 355.25 MiB/s [2024-12-09T23:11:37.379Z] 89760.00 IOPS, 350.62 MiB/s [2024-12-09T23:11:38.774Z] 90325.33 IOPS, 352.83 MiB/s [2024-12-09T23:11:39.718Z] 89616.00 IOPS, 350.06 MiB/s [2024-12-09T23:11:39.718Z] 90752.00 IOPS, 354.50 MiB/s 00:30:01.256 Latency(us) 00:30:01.256 [2024-12-09T23:11:39.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.256 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:30:01.256 xnvme_bdev : 5.00 90707.53 354.33 0.00 0.00 702.10 378.09 2520.62 00:30:01.256 [2024-12-09T23:11:39.718Z] =================================================================================================================== 00:30:01.256 [2024-12-09T23:11:39.718Z] Total : 90707.53 354.33 0.00 0.00 702.10 378.09 2520.62 00:30:01.828 23:11:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:30:01.828 23:11:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:30:01.828 23:11:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:30:01.828 23:11:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:30:01.828 23:11:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.828 { 00:30:01.828 "subsystems": [ 00:30:01.828 { 00:30:01.828 "subsystem": "bdev", 00:30:01.828 "config": [ 00:30:01.828 { 00:30:01.828 "params": { 00:30:01.828 "io_mechanism": "io_uring_cmd", 00:30:01.828 "conserve_cpu": true, 00:30:01.828 "filename": "/dev/ng0n1", 00:30:01.828 "name": "xnvme_bdev" 00:30:01.828 }, 00:30:01.828 "method": "bdev_xnvme_create" 00:30:01.828 }, 00:30:01.828 { 00:30:01.828 "method": "bdev_wait_for_examine" 00:30:01.828 } 00:30:01.828 ] 00:30:01.828 } 00:30:01.828 ] 00:30:01.828 } 00:30:01.828 [2024-12-09 23:11:40.124798] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:01.828 [2024-12-09 23:11:40.124913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71940 ] 00:30:01.828 [2024-12-09 23:11:40.284465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.088 [2024-12-09 23:11:40.384667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.349 Running I/O for 5 seconds... 
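The write_zeroes run starting above is the last of four workloads; xnvme.sh issues one bdevperf pass per entry of a workload array resolved through a bash nameref (the "local -n io_pattern_ref=io_uring_cmd" visible earlier in this trace). A rough sketch of that loop, with the array contents inferred from the four runs recorded here and the config file reused from the earlier note:

# sketch; the real array lives in the test suite, its contents are inferred from this log
run_xnvme_bdevperf() {
  local io_uring_cmd=(randread randwrite unmap write_zeroes)
  local -n io_pattern_ref=io_uring_cmd   # nameref, as in xnvme.sh@13
  local io_pattern
  for io_pattern in "${io_pattern_ref[@]}"; do
    ./build/examples/bdevperf --json /tmp/xnvme_bdev.json \
        -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
  done
}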
00:30:04.263 72.00 IOPS, 0.28 MiB/s [2024-12-09T23:11:43.670Z] 232.50 IOPS, 0.91 MiB/s [2024-12-09T23:11:45.053Z] 273.33 IOPS, 1.07 MiB/s [2024-12-09T23:11:45.992Z] 264.00 IOPS, 1.03 MiB/s [2024-12-09T23:11:45.992Z] 303.40 IOPS, 1.19 MiB/s 00:30:07.530 Latency(us) 00:30:07.530 [2024-12-09T23:11:45.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.530 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:30:07.530 xnvme_bdev : 5.15 307.15 1.20 0.00 0.00 205519.69 51.99 1006632.96 00:30:07.530 [2024-12-09T23:11:45.992Z] =================================================================================================================== 00:30:07.530 [2024-12-09T23:11:45.992Z] Total : 307.15 1.20 0.00 0.00 205519.69 51.99 1006632.96 00:30:08.103 00:30:08.103 real 0m25.280s 00:30:08.103 user 0m18.180s 00:30:08.103 sys 0m6.187s 00:30:08.103 23:11:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.103 ************************************ 00:30:08.103 END TEST xnvme_bdevperf 00:30:08.103 ************************************ 00:30:08.103 23:11:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.103 23:11:46 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:30:08.103 23:11:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:08.103 23:11:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.103 23:11:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:08.103 ************************************ 00:30:08.103 START TEST xnvme_fio_plugin 00:30:08.103 ************************************ 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
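The fio_plugin setup continuing below locates the ASAN runtime that the fio plugin was linked against and preloads it together with the plugin, so fio can load the sanitized spdk_bdev ioengine. Reduced to its core, and assuming the paths shown in this log, the mechanism is:

# sketch of the libasan detection performed in the lines that follow
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # resolves to /usr/lib64/libasan.so.8 here
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev
# in the harness, gen_conf supplies fd 62; standalone, point --spdk_json_conf at a file instead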
00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:08.103 23:11:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:08.365 { 00:30:08.365 "subsystems": [ 00:30:08.365 { 00:30:08.365 "subsystem": "bdev", 00:30:08.365 "config": [ 00:30:08.365 { 00:30:08.365 "params": { 00:30:08.365 "io_mechanism": "io_uring_cmd", 00:30:08.365 "conserve_cpu": true, 00:30:08.365 "filename": "/dev/ng0n1", 00:30:08.365 "name": "xnvme_bdev" 00:30:08.365 }, 00:30:08.365 "method": "bdev_xnvme_create" 00:30:08.365 }, 00:30:08.365 { 00:30:08.365 "method": "bdev_wait_for_examine" 00:30:08.365 } 00:30:08.365 ] 00:30:08.365 } 00:30:08.365 ] 00:30:08.365 } 00:30:08.365 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:30:08.365 fio-3.35 00:30:08.365 Starting 1 thread 00:30:14.948 00:30:14.948 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72058: Mon Dec 9 23:11:52 2024 00:30:14.949 read: IOPS=53.8k, BW=210MiB/s (220MB/s)(1051MiB/5001msec) 00:30:14.949 slat (nsec): min=2883, max=69080, avg=3923.22, stdev=2094.64 00:30:14.949 clat (usec): min=545, max=3089, avg=1036.23, stdev=261.59 00:30:14.949 lat (usec): min=549, max=3092, avg=1040.15, stdev=262.24 00:30:14.949 clat percentiles (usec): 00:30:14.949 | 1.00th=[ 660], 5.00th=[ 709], 10.00th=[ 750], 20.00th=[ 816], 00:30:14.949 | 30.00th=[ 873], 40.00th=[ 930], 50.00th=[ 988], 60.00th=[ 1057], 00:30:14.949 | 70.00th=[ 1123], 80.00th=[ 1221], 90.00th=[ 1352], 95.00th=[ 1532], 00:30:14.949 | 99.00th=[ 1909], 99.50th=[ 2057], 99.90th=[ 2278], 99.95th=[ 2343], 00:30:14.949 | 99.99th=[ 2507] 00:30:14.949 bw ( KiB/s): min=189440, max=237056, per=99.98%, avg=215152.89, stdev=20554.07, samples=9 00:30:14.949 iops : min=47360, max=59264, avg=53788.22, stdev=5138.52, samples=9 00:30:14.949 lat (usec) : 750=9.69%, 1000=41.92% 00:30:14.949 lat (msec) : 2=47.75%, 4=0.65% 00:30:14.949 cpu : usr=53.36%, sys=44.36%, ctx=12, majf=0, minf=762 00:30:14.949 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:30:14.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.949 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 
00:30:14.949 issued rwts: total=269052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:30:14.949 00:30:14.949 Run status group 0 (all jobs): 00:30:14.949 READ: bw=210MiB/s (220MB/s), 210MiB/s-210MiB/s (220MB/s-220MB/s), io=1051MiB (1102MB), run=5001-5001msec 00:30:14.949 ----------------------------------------------------- 00:30:14.949 Suppressions used: 00:30:14.949 count bytes template 00:30:14.949 1 11 /usr/src/fio/parse.c 00:30:14.949 1 8 libtcmalloc_minimal.so 00:30:14.949 1 904 libcrypto.so 00:30:14.949 ----------------------------------------------------- 00:30:14.949 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:14.949 23:11:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:14.949 { 00:30:14.949 "subsystems": [ 00:30:14.949 { 00:30:14.949 "subsystem": "bdev", 00:30:14.949 "config": [ 00:30:14.949 { 00:30:14.949 "params": { 00:30:14.949 "io_mechanism": "io_uring_cmd", 00:30:14.949 "conserve_cpu": true, 00:30:14.949 "filename": "/dev/ng0n1", 00:30:14.949 "name": "xnvme_bdev" 00:30:14.949 }, 00:30:14.949 "method": "bdev_xnvme_create" 00:30:14.949 }, 00:30:14.949 { 00:30:14.949 "method": "bdev_wait_for_examine" 00:30:14.949 } 00:30:14.949 ] 00:30:14.949 } 00:30:14.949 ] 00:30:14.949 } 00:30:15.210 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:30:15.210 fio-3.35 00:30:15.210 Starting 1 thread 00:30:21.796 00:30:21.796 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72149: Mon Dec 9 23:11:59 2024 00:30:21.796 write: IOPS=46.4k, BW=181MiB/s (190MB/s)(906MiB/5001msec); 0 zone resets 00:30:21.796 slat (usec): min=2, max=772, avg= 4.20, stdev= 3.43 00:30:21.796 clat (usec): min=51, max=21184, avg=1224.14, stdev=690.65 00:30:21.796 lat (usec): min=54, max=21187, avg=1228.34, stdev=690.87 00:30:21.796 clat percentiles (usec): 00:30:21.796 | 1.00th=[ 412], 5.00th=[ 734], 10.00th=[ 832], 20.00th=[ 922], 00:30:21.796 | 30.00th=[ 996], 40.00th=[ 1074], 50.00th=[ 1139], 60.00th=[ 1205], 00:30:21.796 | 70.00th=[ 1270], 80.00th=[ 1352], 90.00th=[ 1516], 95.00th=[ 1795], 00:30:21.796 | 99.00th=[ 4621], 99.50th=[ 5669], 99.90th=[ 7242], 99.95th=[ 7832], 00:30:21.796 | 99.99th=[20317] 00:30:21.796 bw ( KiB/s): min=172808, max=204464, per=99.82%, avg=185265.78, stdev=12053.68, samples=9 00:30:21.796 iops : min=43202, max=51116, avg=46316.44, stdev=3013.42, samples=9 00:30:21.796 lat (usec) : 100=0.06%, 250=0.24%, 500=1.38%, 750=3.96%, 1000=24.75% 00:30:21.796 lat (msec) : 2=66.01%, 4=2.16%, 10=1.41%, 20=0.02%, 50=0.01% 00:30:21.796 cpu : usr=48.44%, sys=47.06%, ctx=21, majf=0, minf=763 00:30:21.796 IO depths : 1=1.3%, 2=2.7%, 4=5.5%, 8=11.3%, 16=23.8%, 32=53.3%, >=64=2.0% 00:30:21.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.796 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:30:21.796 issued rwts: total=0,232036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.796 latency : target=0, window=0, percentile=100.00%, depth=64 00:30:21.796 00:30:21.796 Run status group 0 (all jobs): 00:30:21.796 WRITE: bw=181MiB/s (190MB/s), 181MiB/s-181MiB/s (190MB/s-190MB/s), io=906MiB (950MB), run=5001-5001msec 00:30:21.796 ----------------------------------------------------- 00:30:21.796 Suppressions used: 00:30:21.796 count bytes template 00:30:21.796 1 11 /usr/src/fio/parse.c 00:30:21.796 1 8 libtcmalloc_minimal.so 00:30:21.796 1 904 libcrypto.so 00:30:21.796 ----------------------------------------------------- 00:30:21.796 00:30:21.796 ************************************ 00:30:21.796 END TEST xnvme_fio_plugin 00:30:21.796 ************************************ 00:30:21.796 00:30:21.796 real 0m13.508s 00:30:21.796 user 0m7.791s 00:30:21.796 sys 0m5.042s 00:30:21.796 23:12:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.796 23:12:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:30:21.796 Process with pid 71654 is not found 00:30:21.796 23:12:00 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71654 00:30:21.796 23:12:00 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71654 ']' 00:30:21.796 23:12:00 nvme_xnvme 
-- common/autotest_common.sh@958 -- # kill -0 71654 00:30:21.796 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71654) - No such process 00:30:21.796 23:12:00 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71654 is not found' 00:30:21.796 ************************************ 00:30:21.796 END TEST nvme_xnvme 00:30:21.796 ************************************ 00:30:21.796 00:30:21.796 real 3m25.491s 00:30:21.796 user 1m58.124s 00:30:21.796 sys 1m13.123s 00:30:21.796 23:12:00 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.796 23:12:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:21.796 23:12:00 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:30:21.796 23:12:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:21.796 23:12:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.796 23:12:00 -- common/autotest_common.sh@10 -- # set +x 00:30:21.796 ************************************ 00:30:21.796 START TEST blockdev_xnvme 00:30:21.796 ************************************ 00:30:21.796 23:12:00 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:30:21.796 * Looking for test storage... 00:30:21.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:30:21.796 23:12:00 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:21.796 23:12:00 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:21.796 23:12:00 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:30:21.796 23:12:00 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.796 23:12:00 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:30:21.796 23:12:00 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.796 23:12:00 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:21.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.796 --rc genhtml_branch_coverage=1 00:30:21.796 --rc genhtml_function_coverage=1 00:30:21.796 --rc genhtml_legend=1 00:30:21.796 --rc geninfo_all_blocks=1 00:30:21.796 --rc geninfo_unexecuted_blocks=1 00:30:21.796 00:30:21.796 ' 00:30:21.796 23:12:00 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:21.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.796 --rc genhtml_branch_coverage=1 00:30:21.796 --rc genhtml_function_coverage=1 00:30:21.796 --rc genhtml_legend=1 00:30:21.796 --rc geninfo_all_blocks=1 00:30:21.796 --rc geninfo_unexecuted_blocks=1 00:30:21.796 00:30:21.796 ' 00:30:21.796 23:12:00 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:21.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.796 --rc genhtml_branch_coverage=1 00:30:21.796 --rc genhtml_function_coverage=1 00:30:21.796 --rc genhtml_legend=1 00:30:21.796 --rc geninfo_all_blocks=1 00:30:21.796 --rc geninfo_unexecuted_blocks=1 00:30:21.796 00:30:21.796 ' 00:30:21.796 23:12:00 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:21.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.796 --rc genhtml_branch_coverage=1 00:30:21.796 --rc genhtml_function_coverage=1 00:30:21.796 --rc genhtml_legend=1 00:30:21.796 --rc geninfo_all_blocks=1 00:30:21.796 --rc geninfo_unexecuted_blocks=1 00:30:21.796 00:30:21.796 ' 00:30:21.796 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:30:21.796 23:12:00 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:30:21.796 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:21.796 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:21.796 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:30:21.796 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:30:21.797 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:30:21.797 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:30:21.797 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72283 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:22.057 23:12:00 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72283 00:30:22.057 23:12:00 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72283 ']' 00:30:22.057 23:12:00 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.057 23:12:00 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.057 23:12:00 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.057 23:12:00 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.057 23:12:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:22.057 [2024-12-09 23:12:00.332305] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
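The setup that follows registers every /dev/nvme*n* namespace as an xNVMe bdev with the io_uring mechanism and conserve_cpu (-c), as the printf of queued RPC lines further down shows. Against an already-running spdk_tgt, the equivalent single-device call through rpc.py, plus the read-back filter used by the rpc_xnvme helper earlier in this log, would be roughly:

# sketch; arguments match the first bdev_xnvme_create line emitted below
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
# read the registration back, mirroring the jq filter from the xnvme_rpc test
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'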
00:30:22.057 [2024-12-09 23:12:00.332532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72283 ] 00:30:22.057 [2024-12-09 23:12:00.483851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.317 [2024-12-09 23:12:00.568632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.889 23:12:01 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.889 23:12:01 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:30:22.889 23:12:01 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:30:22.889 23:12:01 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:30:22.889 23:12:01 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:30:22.889 23:12:01 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:30:22.889 23:12:01 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:23.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:23.723 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:23.723 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:23.723 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:23.723 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:30:23.723 23:12:02 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:30:23.723 23:12:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:30:23.723 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:30:23.724 nvme0n1 00:30:23.724 nvme0n2 00:30:23.724 nvme0n3 00:30:23.724 nvme1n1 00:30:23.724 nvme2n1 00:30:23.724 nvme3n1 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:23.724 
23:12:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:30:23.724 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.724 23:12:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.985 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:30:23.985 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "754fbc56-c02f-49a9-84c2-e10f26347799"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "754fbc56-c02f-49a9-84c2-e10f26347799",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "22c4bf3d-af4a-4911-a440-00e91f48e150"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "22c4bf3d-af4a-4911-a440-00e91f48e150",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "70472628-46de-40c1-96b0-ac34cac6ec68"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70472628-46de-40c1-96b0-ac34cac6ec68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"af20d48a-7364-4a63-97db-3f42b4088fdd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "af20d48a-7364-4a63-97db-3f42b4088fdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "274afcc1-a31f-40f9-a269-78d5f5603aa4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "274afcc1-a31f-40f9-a269-78d5f5603aa4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "569f2251-bde7-4dab-aeb5-c8100e834027"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "569f2251-bde7-4dab-aeb5-c8100e834027",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:30:23.985 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:30:23.985 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:30:23.985 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:30:23.985 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:30:23.985 23:12:02 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72283 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72283 ']' 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72283 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 72283 00:30:23.985 killing process with pid 72283 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72283' 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72283 00:30:23.985 23:12:02 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72283 00:30:25.370 23:12:03 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:25.370 23:12:03 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:30:25.370 23:12:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:30:25.370 23:12:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.370 23:12:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:25.370 ************************************ 00:30:25.370 START TEST bdev_hello_world 00:30:25.370 ************************************ 00:30:25.370 23:12:03 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:30:25.370 [2024-12-09 23:12:03.519630] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:25.370 [2024-12-09 23:12:03.519887] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72557 ] 00:30:25.370 [2024-12-09 23:12:03.675561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.370 [2024-12-09 23:12:03.773905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.985 [2024-12-09 23:12:04.107770] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:25.985 [2024-12-09 23:12:04.107935] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:30:25.985 [2024-12-09 23:12:04.107956] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:25.985 [2024-12-09 23:12:04.109806] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:25.985 [2024-12-09 23:12:04.110001] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:25.985 [2024-12-09 23:12:04.110021] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:25.985 [2024-12-09 23:12:04.110239] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:30:25.985 00:30:25.985 [2024-12-09 23:12:04.110256] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:26.556 00:30:26.556 real 0m1.366s 00:30:26.556 user 0m1.098s 00:30:26.556 sys 0m0.156s 00:30:26.556 ************************************ 00:30:26.556 END TEST bdev_hello_world 00:30:26.556 ************************************ 00:30:26.556 23:12:04 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.556 23:12:04 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:26.556 23:12:04 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:30:26.556 23:12:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:26.556 23:12:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.556 23:12:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:26.556 ************************************ 00:30:26.556 START TEST bdev_bounds 00:30:26.556 ************************************ 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:30:26.556 Process bdevio pid: 72588 00:30:26.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72588 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72588' 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72588 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72588 ']' 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.556 23:12:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:26.556 [2024-12-09 23:12:04.932144] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
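bdev_bounds launches bdevio in wait mode (-w) and then kicks off the suites below over the RPC socket with tests.py; in the harness, waitforlisten does the synchronization. Stripped of that plumbing, a rough sketch of the two halves, both of which appear verbatim in this trace (paths shortened relative to the absolute ones logged):

# sketch; the backgrounding and wait are assumed framing, not from the log
./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json '' &
# once bdevio reports it is listening on the RPC socket:
./test/bdev/bdevio/tests.py perform_tests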
00:30:26.556 [2024-12-09 23:12:04.932281] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72588 ] 00:30:26.823 [2024-12-09 23:12:05.092244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:26.823 [2024-12-09 23:12:05.195198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.823 [2024-12-09 23:12:05.195300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.823 [2024-12-09 23:12:05.195304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.399 23:12:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.399 23:12:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:30:27.399 23:12:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:27.661 I/O targets: 00:30:27.661 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:30:27.661 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:30:27.661 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:30:27.661 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:30:27.661 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:30:27.661 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:30:27.661 00:30:27.661 00:30:27.661 CUnit - A unit testing framework for C - Version 2.1-3 00:30:27.661 http://cunit.sourceforge.net/ 00:30:27.661 00:30:27.661 00:30:27.661 Suite: bdevio tests on: nvme3n1 00:30:27.661 Test: blockdev write read block ...passed 00:30:27.661 Test: blockdev write zeroes read block ...passed 00:30:27.661 Test: blockdev write zeroes read no split ...passed 00:30:27.661 Test: blockdev write zeroes read split ...passed 00:30:27.661 Test: blockdev write zeroes read split partial ...passed 00:30:27.661 Test: blockdev reset ...passed 00:30:27.661 Test: blockdev write read 8 blocks ...passed 00:30:27.661 Test: blockdev write read size > 128k ...passed 00:30:27.661 Test: blockdev write read invalid size ...passed 00:30:27.661 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:27.661 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:27.661 Test: blockdev write read max offset ...passed 00:30:27.661 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:27.661 Test: blockdev writev readv 8 blocks ...passed 00:30:27.661 Test: blockdev writev readv 30 x 1block ...passed 00:30:27.661 Test: blockdev writev readv block ...passed 00:30:27.661 Test: blockdev writev readv size > 128k ...passed 00:30:27.661 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:27.661 Test: blockdev comparev and writev ...passed 00:30:27.661 Test: blockdev nvme passthru rw ...passed 00:30:27.661 Test: blockdev nvme passthru vendor specific ...passed 00:30:27.661 Test: blockdev nvme admin passthru ...passed 00:30:27.661 Test: blockdev copy ...passed 00:30:27.661 Suite: bdevio tests on: nvme2n1 00:30:27.661 Test: blockdev write read block ...passed 00:30:27.661 Test: blockdev write zeroes read block ...passed 00:30:27.661 Test: blockdev write zeroes read no split ...passed 00:30:27.661 Test: blockdev write zeroes read split ...passed 00:30:27.661 Test: blockdev write zeroes read split partial ...passed 00:30:27.661 Test: blockdev reset ...passed 
00:30:27.661 Test: blockdev write read 8 blocks ...passed 00:30:27.661 Test: blockdev write read size > 128k ...passed 00:30:27.661 Test: blockdev write read invalid size ...passed 00:30:27.661 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:27.661 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:27.661 Test: blockdev write read max offset ...passed 00:30:27.661 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:27.661 Test: blockdev writev readv 8 blocks ...passed 00:30:27.661 Test: blockdev writev readv 30 x 1block ...passed 00:30:27.661 Test: blockdev writev readv block ...passed 00:30:27.661 Test: blockdev writev readv size > 128k ...passed 00:30:27.661 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:27.661 Test: blockdev comparev and writev ...passed 00:30:27.661 Test: blockdev nvme passthru rw ...passed 00:30:27.661 Test: blockdev nvme passthru vendor specific ...passed 00:30:27.661 Test: blockdev nvme admin passthru ...passed 00:30:27.661 Test: blockdev copy ...passed 00:30:27.661 Suite: bdevio tests on: nvme1n1 00:30:27.661 Test: blockdev write read block ...passed 00:30:27.661 Test: blockdev write zeroes read block ...passed 00:30:27.661 Test: blockdev write zeroes read no split ...passed 00:30:27.661 Test: blockdev write zeroes read split ...passed 00:30:27.661 Test: blockdev write zeroes read split partial ...passed 00:30:27.661 Test: blockdev reset ...passed 00:30:27.661 Test: blockdev write read 8 blocks ...passed 00:30:27.661 Test: blockdev write read size > 128k ...passed 00:30:27.661 Test: blockdev write read invalid size ...passed 00:30:27.661 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:27.661 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:27.661 Test: blockdev write read max offset ...passed 00:30:27.661 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:27.661 Test: blockdev writev readv 8 blocks ...passed 00:30:27.661 Test: blockdev writev readv 30 x 1block ...passed 00:30:27.661 Test: blockdev writev readv block ...passed 00:30:27.661 Test: blockdev writev readv size > 128k ...passed 00:30:27.661 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:27.661 Test: blockdev comparev and writev ...passed 00:30:27.661 Test: blockdev nvme passthru rw ...passed 00:30:27.661 Test: blockdev nvme passthru vendor specific ...passed 00:30:27.661 Test: blockdev nvme admin passthru ...passed 00:30:27.661 Test: blockdev copy ...passed 00:30:27.661 Suite: bdevio tests on: nvme0n3 00:30:27.661 Test: blockdev write read block ...passed 00:30:27.661 Test: blockdev write zeroes read block ...passed 00:30:27.661 Test: blockdev write zeroes read no split ...passed 00:30:27.661 Test: blockdev write zeroes read split ...passed 00:30:27.661 Test: blockdev write zeroes read split partial ...passed 00:30:27.661 Test: blockdev reset ...passed 00:30:27.661 Test: blockdev write read 8 blocks ...passed 00:30:27.661 Test: blockdev write read size > 128k ...passed 00:30:27.661 Test: blockdev write read invalid size ...passed 00:30:27.661 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:27.661 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:27.661 Test: blockdev write read max offset ...passed 00:30:27.661 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:27.661 Test: blockdev writev readv 8 blocks 
...passed 00:30:27.661 Test: blockdev writev readv 30 x 1block ...passed 00:30:27.661 Test: blockdev writev readv block ...passed 00:30:27.661 Test: blockdev writev readv size > 128k ...passed 00:30:27.661 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:27.661 Test: blockdev comparev and writev ...passed 00:30:27.661 Test: blockdev nvme passthru rw ...passed 00:30:27.661 Test: blockdev nvme passthru vendor specific ...passed 00:30:27.661 Test: blockdev nvme admin passthru ...passed 00:30:27.661 Test: blockdev copy ...passed 00:30:27.661 Suite: bdevio tests on: nvme0n2 00:30:27.661 Test: blockdev write read block ...passed 00:30:27.661 Test: blockdev write zeroes read block ...passed 00:30:27.661 Test: blockdev write zeroes read no split ...passed 00:30:27.661 Test: blockdev write zeroes read split ...passed 00:30:27.924 Test: blockdev write zeroes read split partial ...passed 00:30:27.924 Test: blockdev reset ...passed 00:30:27.924 Test: blockdev write read 8 blocks ...passed 00:30:27.924 Test: blockdev write read size > 128k ...passed 00:30:27.924 Test: blockdev write read invalid size ...passed 00:30:27.924 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:27.924 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:27.924 Test: blockdev write read max offset ...passed 00:30:27.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:27.924 Test: blockdev writev readv 8 blocks ...passed 00:30:27.924 Test: blockdev writev readv 30 x 1block ...passed 00:30:27.924 Test: blockdev writev readv block ...passed 00:30:27.924 Test: blockdev writev readv size > 128k ...passed 00:30:27.924 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:27.924 Test: blockdev comparev and writev ...passed 00:30:27.924 Test: blockdev nvme passthru rw ...passed 00:30:27.924 Test: blockdev nvme passthru vendor specific ...passed 00:30:27.924 Test: blockdev nvme admin passthru ...passed 00:30:27.924 Test: blockdev copy ...passed 00:30:27.924 Suite: bdevio tests on: nvme0n1 00:30:27.924 Test: blockdev write read block ...passed 00:30:27.924 Test: blockdev write zeroes read block ...passed 00:30:27.924 Test: blockdev write zeroes read no split ...passed 00:30:27.924 Test: blockdev write zeroes read split ...passed 00:30:27.924 Test: blockdev write zeroes read split partial ...passed 00:30:27.924 Test: blockdev reset ...passed 00:30:27.924 Test: blockdev write read 8 blocks ...passed 00:30:27.924 Test: blockdev write read size > 128k ...passed 00:30:27.924 Test: blockdev write read invalid size ...passed 00:30:27.924 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:27.924 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:27.924 Test: blockdev write read max offset ...passed 00:30:27.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:27.924 Test: blockdev writev readv 8 blocks ...passed 00:30:27.924 Test: blockdev writev readv 30 x 1block ...passed 00:30:27.924 Test: blockdev writev readv block ...passed 00:30:27.924 Test: blockdev writev readv size > 128k ...passed 00:30:27.924 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:27.924 Test: blockdev comparev and writev ...passed 00:30:27.924 Test: blockdev nvme passthru rw ...passed 00:30:27.924 Test: blockdev nvme passthru vendor specific ...passed 00:30:27.924 Test: blockdev nvme admin passthru ...passed 00:30:27.924 Test: blockdev copy ...passed 
00:30:27.924 00:30:27.924 Run Summary: Type Total Ran Passed Failed Inactive 00:30:27.924 suites 6 6 n/a 0 0 00:30:27.924 tests 138 138 138 0 0 00:30:27.924 asserts 780 780 780 0 n/a 00:30:27.924 00:30:27.924 Elapsed time = 0.862 seconds 00:30:27.924 0 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72588 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72588 ']' 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72588 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72588 00:30:27.924 killing process with pid 72588 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72588' 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72588 00:30:27.924 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72588 00:30:28.496 23:12:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:30:28.496 00:30:28.496 real 0m2.070s 00:30:28.496 user 0m5.217s 00:30:28.496 sys 0m0.262s 00:30:28.496 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.496 ************************************ 00:30:28.496 END TEST bdev_bounds 00:30:28.496 23:12:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:28.496 ************************************ 00:30:28.757 23:12:06 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:30:28.758 23:12:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:28.758 23:12:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.758 23:12:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:28.758 ************************************ 00:30:28.758 START TEST bdev_nbd 00:30:28.758 ************************************ 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:30:28.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72646 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72646 /var/tmp/spdk-nbd.sock 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72646 ']' 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:28.758 23:12:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:28.758 [2024-12-09 23:12:07.051340] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:30:28.758 [2024-12-09 23:12:07.051611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.758 [2024-12-09 23:12:07.209331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.029 [2024-12-09 23:12:07.310049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:29.603 23:12:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:29.864 
1+0 records in 00:30:29.864 1+0 records out 00:30:29.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412874 s, 9.9 MB/s 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:29.864 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:30.126 1+0 records in 00:30:30.126 1+0 records out 00:30:30.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413532 s, 9.9 MB/s 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:30.126 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:30:30.390 23:12:08 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:30.390 1+0 records in 00:30:30.390 1+0 records out 00:30:30.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468602 s, 8.7 MB/s 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:30.390 1+0 records in 00:30:30.390 1+0 records out 00:30:30.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338694 s, 12.1 MB/s 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:30.390 23:12:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:30.652 1+0 records in 00:30:30.652 1+0 records out 00:30:30.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605173 s, 6.8 MB/s 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:30.652 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:30:30.913 23:12:09 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:30.913 1+0 records in 00:30:30.913 1+0 records out 00:30:30.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528022 s, 7.8 MB/s 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:30:30.913 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd0", 00:30:31.174 "bdev_name": "nvme0n1" 00:30:31.174 }, 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd1", 00:30:31.174 "bdev_name": "nvme0n2" 00:30:31.174 }, 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd2", 00:30:31.174 "bdev_name": "nvme0n3" 00:30:31.174 }, 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd3", 00:30:31.174 "bdev_name": "nvme1n1" 00:30:31.174 }, 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd4", 00:30:31.174 "bdev_name": "nvme2n1" 00:30:31.174 }, 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd5", 00:30:31.174 "bdev_name": "nvme3n1" 00:30:31.174 } 00:30:31.174 ]' 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd0", 00:30:31.174 "bdev_name": "nvme0n1" 00:30:31.174 }, 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd1", 00:30:31.174 "bdev_name": "nvme0n2" 00:30:31.174 }, 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd2", 00:30:31.174 "bdev_name": "nvme0n3" 00:30:31.174 }, 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd3", 00:30:31.174 "bdev_name": "nvme1n1" 00:30:31.174 }, 00:30:31.174 { 00:30:31.174 "nbd_device": "/dev/nbd4", 00:30:31.174 "bdev_name": "nvme2n1" 00:30:31.174 }, 00:30:31.174 { 00:30:31.174 "nbd_device": 
"/dev/nbd5", 00:30:31.174 "bdev_name": "nvme3n1" 00:30:31.174 } 00:30:31.174 ]' 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:31.174 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:31.435 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:31.435 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:31.435 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:31.435 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:31.435 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:31.435 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:31.435 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:31.435 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:31.435 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:31.435 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:31.695 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:31.695 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:31.695 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:31.695 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:31.695 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:31.695 23:12:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:31.695 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:31.695 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:31.695 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:31.695 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:30:31.954 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:30:31.954 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:30:31.954 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:30:31.954 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:31.954 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:31.954 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:30:31.954 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:31.954 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:31.955 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:31.955 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:32.215 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:32.520 23:12:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:32.781 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:32.782 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:30:32.782 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:32.782 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:32.782 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:30:33.043 /dev/nbd0 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:33.043 1+0 records in 00:30:33.043 1+0 records out 00:30:33.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00161222 s, 2.5 MB/s 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:33.043 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:30:33.305 /dev/nbd1 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:33.305 1+0 records in 00:30:33.305 1+0 records out 00:30:33.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364531 s, 11.2 MB/s 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:33.305 23:12:11 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:33.305 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:30:33.305 /dev/nbd10 00:30:33.565 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:30:33.565 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:30:33.565 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:30:33.565 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:33.566 1+0 records in 00:30:33.566 1+0 records out 00:30:33.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635883 s, 6.4 MB/s 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:30:33.566 /dev/nbd11 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:33.566 23:12:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:33.566 23:12:11 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:33.566 1+0 records in 00:30:33.566 1+0 records out 00:30:33.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372802 s, 11.0 MB/s 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:33.566 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:30:33.826 /dev/nbd12 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:33.826 1+0 records in 00:30:33.826 1+0 records out 00:30:33.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417325 s, 9.8 MB/s 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:33.826 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:30:34.086 /dev/nbd13 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:34.086 1+0 records in 00:30:34.086 1+0 records out 00:30:34.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393224 s, 10.4 MB/s 00:30:34.086 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.087 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:34.087 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.087 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:34.087 23:12:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:34.087 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:34.087 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:30:34.087 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:34.087 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:34.087 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd0", 00:30:34.348 "bdev_name": "nvme0n1" 00:30:34.348 }, 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd1", 00:30:34.348 "bdev_name": "nvme0n2" 00:30:34.348 }, 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd10", 00:30:34.348 "bdev_name": "nvme0n3" 00:30:34.348 }, 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd11", 00:30:34.348 "bdev_name": "nvme1n1" 00:30:34.348 }, 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd12", 00:30:34.348 "bdev_name": "nvme2n1" 00:30:34.348 }, 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd13", 00:30:34.348 "bdev_name": "nvme3n1" 00:30:34.348 } 00:30:34.348 ]' 00:30:34.348 23:12:12 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd0", 00:30:34.348 "bdev_name": "nvme0n1" 00:30:34.348 }, 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd1", 00:30:34.348 "bdev_name": "nvme0n2" 00:30:34.348 }, 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd10", 00:30:34.348 "bdev_name": "nvme0n3" 00:30:34.348 }, 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd11", 00:30:34.348 "bdev_name": "nvme1n1" 00:30:34.348 }, 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd12", 00:30:34.348 "bdev_name": "nvme2n1" 00:30:34.348 }, 00:30:34.348 { 00:30:34.348 "nbd_device": "/dev/nbd13", 00:30:34.348 "bdev_name": "nvme3n1" 00:30:34.348 } 00:30:34.348 ]' 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:34.348 /dev/nbd1 00:30:34.348 /dev/nbd10 00:30:34.348 /dev/nbd11 00:30:34.348 /dev/nbd12 00:30:34.348 /dev/nbd13' 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:34.348 /dev/nbd1 00:30:34.348 /dev/nbd10 00:30:34.348 /dev/nbd11 00:30:34.348 /dev/nbd12 00:30:34.348 /dev/nbd13' 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:34.348 256+0 records in 00:30:34.348 256+0 records out 00:30:34.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00707871 s, 148 MB/s 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:34.348 256+0 records in 00:30:34.348 256+0 records out 00:30:34.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.060085 s, 17.5 MB/s 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:34.348 256+0 records in 00:30:34.348 256+0 records out 00:30:34.348 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0620598 s, 16.9 MB/s 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:34.348 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:30:34.609 256+0 records in 00:30:34.609 256+0 records out 00:30:34.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0696654 s, 15.1 MB/s 00:30:34.609 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:34.609 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:30:34.609 256+0 records in 00:30:34.609 256+0 records out 00:30:34.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0643094 s, 16.3 MB/s 00:30:34.609 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:34.609 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:30:34.609 256+0 records in 00:30:34.609 256+0 records out 00:30:34.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0787347 s, 13.3 MB/s 00:30:34.609 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:34.609 23:12:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:30:34.609 256+0 records in 00:30:34.609 256+0 records out 00:30:34.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0640931 s, 16.4 MB/s 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:34.609 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:34.869 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:35.130 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:35.130 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:35.130 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:35.130 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:35.130 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.130 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:35.130 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:35.130 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:35.130 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:35.130 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:30:35.392 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:30:35.392 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:30:35.392 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:30:35.392 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:35.392 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.392 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:30:35.392 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:35.392 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:35.392 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:35.392 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:30:35.653 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:30:35.653 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:30:35.653 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:30:35.653 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:35.653 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.653 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:30:35.653 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:35.653 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:35.653 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:35.653 23:12:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:30:35.653 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:30:35.653 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:30:35.653 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:30:35.653 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:35.653 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.653 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:30:35.653 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:35.653 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:35.653 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:35.653 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:30:35.913 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:30:35.913 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:30:35.913 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:30:35.913 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:35.913 23:12:14 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.913 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:30:35.913 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:35.913 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:35.913 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:35.913 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:35.913 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:30:36.176 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:36.437 malloc_lvol_verify 00:30:36.437 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:36.698 0032937a-4079-479f-94ca-8fb2bed4f719 00:30:36.698 23:12:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:36.698 5beb33cb-1fa5-4b69-b7d0-8c9663224e4e 00:30:36.698 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:36.960 /dev/nbd0 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
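Condensed from the nbd_with_lvol_verify trace above, the whole lvol-over-NBD check comes down to a short rpc.py sequence followed by a plain mkfs. A minimal sketch, reconstructed from the xtrace rather than copied verbatim from nbd_common.sh, and assuming a running SPDK target serving /var/tmp/spdk-nbd.sock with /dev/nbd0 free:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev with 512-byte blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume inside that store
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as a kernel block device
mkfs.ext4 /dev/nbd0                                    # prove the kernel can actually use it
$rpc nbd_stop_disk /dev/nbd0

The mke2fs output that follows is that mkfs.ext4 step completing against the 4 MB volume.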
00:30:36.960 mke2fs 1.47.0 (5-Feb-2023) 00:30:36.960 Discarding device blocks: 0/4096 done 00:30:36.960 Creating filesystem with 4096 1k blocks and 1024 inodes 00:30:36.960 00:30:36.960 Allocating group tables: 0/1 done 00:30:36.960 Writing inode tables: 0/1 done 00:30:36.960 Creating journal (1024 blocks): done 00:30:36.960 Writing superblocks and filesystem accounting information: 0/1 done 00:30:36.960 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:36.960 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72646 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72646 ']' 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72646 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72646 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:37.222 killing process with pid 72646 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72646' 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72646 00:30:37.222 23:12:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72646 00:30:37.796 23:12:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:30:37.796 00:30:37.796 real 0m9.246s 00:30:37.796 user 0m13.185s 00:30:37.796 sys 0m3.103s 00:30:37.796 23:12:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.796 23:12:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:37.796 ************************************ 
00:30:37.796 END TEST bdev_nbd 00:30:37.796 ************************************ 00:30:38.054 23:12:16 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:30:38.054 23:12:16 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:30:38.054 23:12:16 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:30:38.054 23:12:16 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:30:38.054 23:12:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:38.054 23:12:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.054 23:12:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:38.054 ************************************ 00:30:38.054 START TEST bdev_fio 00:30:38.054 ************************************ 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:30:38.054 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:38.054 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:38.055 ************************************ 00:30:38.055 START TEST bdev_fio_rw_verify 00:30:38.055 ************************************ 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:38.055 23:12:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:30:38.055 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:38.055 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:38.055 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:38.055 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:38.055 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:38.055 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:38.055 fio-3.35 00:30:38.055 Starting 6 threads 00:30:50.242 00:30:50.242 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=73039: Mon Dec 9 23:12:27 2024 00:30:50.242 read: IOPS=42.3k, BW=165MiB/s (173MB/s)(1653MiB/10001msec) 00:30:50.242 slat (usec): min=2, max=743, avg= 4.74, stdev= 3.60 00:30:50.242 clat (usec): min=63, max=5569, avg=381.86, stdev=199.62 
00:30:50.242 lat (usec): min=66, max=5573, avg=386.60, stdev=200.00
00:30:50.242 clat percentiles (usec):
00:30:50.242 | 50.000th=[ 359], 99.000th=[ 930], 99.900th=[ 1434], 99.990th=[ 4080],
00:30:50.242 | 99.999th=[ 5538]
00:30:50.242 write: IOPS=42.8k, BW=167MiB/s (175MB/s)(1672MiB/10001msec); 0 zone resets
00:30:50.242 slat (usec): min=6, max=1735, avg=24.13, stdev=36.44
00:30:50.242 clat (usec): min=59, max=5726, avg=525.01, stdev=228.29
00:30:50.242 lat (usec): min=73, max=5741, avg=549.15, stdev=233.15
00:30:50.242 clat percentiles (usec):
00:30:50.242 | 50.000th=[ 498], 99.000th=[ 1205], 99.900th=[ 1614], 99.990th=[ 3589],
00:30:50.242 | 99.999th=[ 5669]
00:30:50.242 bw ( KiB/s): min=151440, max=194688, per=99.88%, avg=170962.32, stdev=2064.96, samples=114
00:30:50.242 iops : min=37858, max=48672, avg=42739.47, stdev=516.23, samples=114
00:30:50.242 lat (usec) : 100=0.13%, 250=17.47%, 500=46.23%, 750=27.22%, 1000=7.02%
00:30:50.242 lat (msec) : 2=1.88%, 4=0.04%, 10=0.01%
00:30:50.242 cpu : usr=46.70%, sys=34.07%, ctx=10575, majf=0, minf=33928
00:30:50.242 IO depths : 1=11.5%, 2=23.7%, 4=51.2%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:50.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:50.242 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:50.242 issued rwts: total=423102,427948,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:50.242 latency : target=0, window=0, percentile=100.00%, depth=8
00:30:50.242
00:30:50.242 Run status group 0 (all jobs):
00:30:50.242 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=1653MiB (1733MB), run=10001-10001msec
00:30:50.242 WRITE: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=1672MiB (1753MB), run=10001-10001msec
00:30:50.242 -----------------------------------------------------
00:30:50.242 Suppressions used:
00:30:50.242 count bytes template
00:30:50.242 6 48 /usr/src/fio/parse.c
00:30:50.242 4565 438240 /usr/src/fio/iolog.c
00:30:50.242 1 8 libtcmalloc_minimal.so
00:30:50.242 1 904 libcrypto.so
00:30:50.242 -----------------------------------------------------
00:30:50.242
00:30:50.242
00:30:50.242 real 0m11.803s
00:30:50.242 user 0m29.433s
00:30:50.242 sys 0m20.699s
00:30:50.242 23:12:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:50.242 23:12:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:30:50.242 ************************************
00:30:50.242 END TEST bdev_fio_rw_verify
00:30:50.242 ************************************
00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
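Pieced together from the fio_config_gen and per-bdev echo calls earlier in this test, the generated bdev.fio is an ordinary fio job file: a global section that wires in SPDK's external ioengine plus one [job_*] section per bdev. An illustrative reconstruction, not the exact generated file, since the log only shows serialize_overlap=1 being appended to the global defaults:

; bdev.fio (sketch)
[global]
ioengine=spdk_bdev        ; in the actual run, passed as --ioengine=spdk_bdev on the command line
spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   ; likewise passed as a CLI flag
serialize_overlap=1

[job_nvme0n1]
filename=nvme0n1

[job_nvme0n2]
filename=nvme0n2

; ...and likewise for nvme0n3, nvme1n1, nvme2n1 and nvme3n1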
00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "754fbc56-c02f-49a9-84c2-e10f26347799"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "754fbc56-c02f-49a9-84c2-e10f26347799",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "22c4bf3d-af4a-4911-a440-00e91f48e150"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "22c4bf3d-af4a-4911-a440-00e91f48e150",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "70472628-46de-40c1-96b0-ac34cac6ec68"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70472628-46de-40c1-96b0-ac34cac6ec68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "af20d48a-7364-4a63-97db-3f42b4088fdd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "af20d48a-7364-4a63-97db-3f42b4088fdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "274afcc1-a31f-40f9-a269-78d5f5603aa4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "274afcc1-a31f-40f9-a269-78d5f5603aa4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "569f2251-bde7-4dab-aeb5-c8100e834027"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "569f2251-bde7-4dab-aeb5-c8100e834027",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:30:50.243 /home/vagrant/spdk_repo/spdk 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:30:50.243 00:30:50.243 
real 0m11.939s 00:30:50.243 user 0m29.500s 00:30:50.243 sys 0m20.769s 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:50.243 23:12:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:50.243 ************************************ 00:30:50.243 END TEST bdev_fio 00:30:50.243 ************************************ 00:30:50.243 23:12:28 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:50.243 23:12:28 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:50.243 23:12:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:30:50.243 23:12:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:50.243 23:12:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:50.243 ************************************ 00:30:50.243 START TEST bdev_verify 00:30:50.243 ************************************ 00:30:50.243 23:12:28 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:50.243 [2024-12-09 23:12:28.308288] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:50.243 [2024-12-09 23:12:28.308402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73215 ] 00:30:50.243 [2024-12-09 23:12:28.465080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:50.243 [2024-12-09 23:12:28.566445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.243 [2024-12-09 23:12:28.566641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.502 Running I/O for 5 seconds... 
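Before the numbers land, it helps to decode the bdevperf invocation that produces them; the same call with its flags annotated (the glosses are standard bdevperf semantics as I understand them, not something this log states):

# -q 128  : keep up to 128 I/Os outstanding per job
# -o 4096 : 4 KiB per I/O
# -w verify : write a pattern, read it back and compare
# -t 5    : run for five seconds
# -m 0x3  : reactors on cores 0 and 1 (hence the two 'Reactor started' lines above)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''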
00:30:52.850 23392.00 IOPS, 91.38 MiB/s [2024-12-09T23:12:32.267Z] 24000.00 IOPS, 93.75 MiB/s [2024-12-09T23:12:33.200Z] 24298.00 IOPS, 94.91 MiB/s [2024-12-09T23:12:34.133Z] 23727.50 IOPS, 92.69 MiB/s [2024-12-09T23:12:34.134Z] 23450.00 IOPS, 91.60 MiB/s
00:30:55.672 Latency(us)
00:30:55.672 [2024-12-09T23:12:34.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:55.672 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0x0 length 0x80000
00:30:55.672 nvme0n1 : 5.02 1656.31 6.47 0.00 0.00 77119.04 11544.42 76223.41
00:30:55.672 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0x80000 length 0x80000
00:30:55.672 nvme0n1 : 5.07 1716.22 6.70 0.00 0.00 74441.16 12905.55 70173.93
00:30:55.672 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0x0 length 0x80000
00:30:55.672 nvme0n2 : 5.07 1664.69 6.50 0.00 0.00 76559.91 13409.67 69367.34
00:30:55.672 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0x80000 length 0x80000
00:30:55.672 nvme0n2 : 5.03 1703.83 6.66 0.00 0.00 74834.46 15627.82 62107.96
00:30:55.672 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0x0 length 0x80000
00:30:55.672 nvme0n3 : 5.04 1649.99 6.45 0.00 0.00 77059.44 10939.47 68560.74
00:30:55.672 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0x80000 length 0x80000
00:30:55.672 nvme0n3 : 5.08 1712.81 6.69 0.00 0.00 74290.59 7965.14 68964.04
00:30:55.672 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0x0 length 0x20000
00:30:55.672 nvme1n1 : 5.08 1664.14 6.50 0.00 0.00 76237.43 14115.45 60091.47
00:30:55.672 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0x20000 length 0x20000
00:30:55.672 nvme1n1 : 5.07 1715.57 6.70 0.00 0.00 74021.75 15022.87 67754.14
00:30:55.672 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0x0 length 0xbd0bd
00:30:55.672 nvme2n1 : 5.08 3079.24 12.03 0.00 0.00 41045.57 3831.34 61704.66
00:30:55.672 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:30:55.672 nvme2n1 : 5.08 3297.39 12.88 0.00 0.00 38359.40 3428.04 61301.37
00:30:55.672 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0x0 length 0xa0000
00:30:55.672 nvme3n1 : 5.09 1660.18 6.49 0.00 0.00 76090.10 13308.85 75416.81
00:30:55.672 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:55.672 Verification LBA range: start 0xa0000 length 0xa0000
00:30:55.672 nvme3n1 : 5.09 1659.26 6.48 0.00 0.00 76129.52 4738.76 106470.79
00:30:55.672 [2024-12-09T23:12:34.134Z] ===================================================================================================================
00:30:55.672 [2024-12-09T23:12:34.134Z] Total : 23179.63 90.55 0.00 0.00 65737.20 3428.04 106470.79
00:30:56.605
00:30:56.605 real 0m6.564s
00:30:56.605 user 0m10.443s
00:30:56.605 sys 0m1.705s 23:12:34 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.605 23:12:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:30:56.605 ************************************ 00:30:56.605 END TEST bdev_verify 00:30:56.605 ************************************ 00:30:56.605 23:12:34 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:56.605 23:12:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:30:56.605 23:12:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.605 23:12:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:56.605 ************************************ 00:30:56.605 START TEST bdev_verify_big_io 00:30:56.605 ************************************ 00:30:56.605 23:12:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:56.605 [2024-12-09 23:12:34.914119] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:56.605 [2024-12-09 23:12:34.914249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73318 ] 00:30:56.865 [2024-12-09 23:12:35.075104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:56.865 [2024-12-09 23:12:35.173526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.865 [2024-12-09 23:12:35.173538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.432 Running I/O for 5 seconds... 
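This big-I/O pass repeats the verify workload with 64 KiB I/Os (-o 65536) instead of 4 KiB, so each job may now have up to 128 x 64 KiB = 8 MiB in flight. As a sanity check on the totals below, IOPS times block size should reproduce the bandwidth column:

    1589.77 IOPS x 64 KiB = 1589.77 / 16 MiB/s = 99.36 MiB/s   (the Total row below)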
00:31:03.531 1376.00 IOPS, 86.00 MiB/s [2024-12-09T23:12:41.993Z] 2427.00 IOPS, 151.69 MiB/s [2024-12-09T23:12:42.251Z] 2680.67 IOPS, 167.54 MiB/s
00:31:03.789 Latency(us)
00:31:03.789 [2024-12-09T23:12:42.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:03.789 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0x0 length 0x8000
00:31:03.789 nvme0n1 : 5.48 81.78 5.11 0.00 0.00 1484829.71 190356.87 1832588.21
00:31:03.789 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0x8000 length 0x8000
00:31:03.789 nvme0n1 : 5.94 129.23 8.08 0.00 0.00 930701.64 5545.35 1025991.29
00:31:03.789 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0x0 length 0x8000
00:31:03.789 nvme0n2 : 6.10 73.41 4.59 0.00 0.00 1577257.97 94371.84 1961643.72
00:31:03.789 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0x8000 length 0x8000
00:31:03.789 nvme0n2 : 5.84 106.24 6.64 0.00 0.00 1140980.13 75820.11 2181038.08
00:31:03.789 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0x0 length 0x8000
00:31:03.789 nvme0n3 : 6.15 122.24 7.64 0.00 0.00 886777.16 86305.87 1122782.92
00:31:03.789 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0x8000 length 0x8000
00:31:03.789 nvme0n3 : 5.95 125.06 7.82 0.00 0.00 940282.77 77030.01 800144.15
00:31:03.789 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0x0 length 0x2000
00:31:03.789 nvme1n1 : 6.23 123.35 7.71 0.00 0.00 838863.80 4587.52 967916.31
00:31:03.789 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0x2000 length 0x2000
00:31:03.789 nvme1n1 : 5.95 104.86 6.55 0.00 0.00 1082524.81 105664.20 2426243.54
00:31:03.789 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0x0 length 0xbd0b
00:31:03.789 nvme2n1 : 6.32 220.18 13.76 0.00 0.00 450745.50 3213.78 864671.90
00:31:03.789 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0xbd0b length 0xbd0b
00:31:03.789 nvme2n1 : 5.95 123.79 7.74 0.00 0.00 875563.08 14115.45 1535760.54
00:31:03.789 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0x0 length 0xa000
00:31:03.789 nvme3n1 : 6.55 242.65 15.17 0.00 0.00 390807.81 259.94 2297188.04
00:31:03.789 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:03.789 Verification LBA range: start 0xa000 length 0xa000
00:31:03.789 nvme3n1 : 5.96 136.99 8.56 0.00 0.00 785360.90 2797.88 1013085.74
00:31:03.789 [2024-12-09T23:12:42.251Z] ===================================================================================================================
00:31:03.789 [2024-12-09T23:12:42.251Z] Total : 1589.77 99.36 0.00 0.00 825701.89 259.94 2426243.54
00:31:04.771
00:31:04.771 real 0m8.219s
00:31:04.771 user 0m15.292s
00:31:04.771 sys 0m0.384s
00:31:04.772 23:12:43 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:04.772 23:12:43
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:31:04.772 ************************************ 00:31:04.772 END TEST bdev_verify_big_io 00:31:04.772 ************************************ 00:31:04.772 23:12:43 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:04.772 23:12:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:31:04.772 23:12:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:04.772 23:12:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:04.772 ************************************ 00:31:04.772 START TEST bdev_write_zeroes 00:31:04.772 ************************************ 00:31:04.772 23:12:43 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:04.772 [2024-12-09 23:12:43.173133] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:31:04.772 [2024-12-09 23:12:43.173275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73433 ] 00:31:05.028 [2024-12-09 23:12:43.330943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.028 [2024-12-09 23:12:43.430007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.592 Running I/O for 1 seconds... 
00:31:06.524 78176.00 IOPS, 305.38 MiB/s
00:31:06.524
00:31:06.524 Latency(us)
00:31:06.524 [2024-12-09T23:12:44.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:06.524 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:06.524 nvme0n1 : 1.03 11211.01 43.79 0.00 0.00 11407.08 6553.60 22080.59
00:31:06.524 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:06.524 nvme0n2 : 1.03 11198.49 43.74 0.00 0.00 11411.55 6604.01 21979.77
00:31:06.524 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:06.524 nvme0n3 : 1.03 11185.18 43.69 0.00 0.00 11416.34 6553.60 21979.77
00:31:06.524 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:06.524 nvme1n1 : 1.03 11172.56 43.64 0.00 0.00 11421.32 6553.60 21979.77
00:31:06.524 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:06.524 nvme2n1 : 1.03 21412.93 83.64 0.00 0.00 5952.30 2129.92 22181.42
00:31:06.524 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:06.524 nvme3n1 : 1.03 11223.72 43.84 0.00 0.00 11311.13 4259.84 23592.96
00:31:06.524 [2024-12-09T23:12:44.986Z] ===================================================================================================================
00:31:06.524 [2024-12-09T23:12:44.987Z] Total : 77403.89 302.36 0.00 0.00 9892.02 2129.92 23592.96
00:31:07.091
00:31:07.091 real 0m2.433s
00:31:07.091 user 0m1.669s
00:31:07.091 sys 0m0.590s
00:31:07.091 23:12:45 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:07.091 23:12:45 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:31:07.091 ************************************
00:31:07.091 END TEST bdev_write_zeroes ************************************
00:31:07.349 23:12:45 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:07.349 23:12:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:31:07.349 23:12:45 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:07.349 23:12:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:31:07.349 ************************************
00:31:07.349 START TEST bdev_json_nonenclosed ************************************
00:31:07.349 23:12:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:07.349 [2024-12-09 23:12:45.645702] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:31:07.349 [2024-12-09 23:12:45.645821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73477 ] 00:31:07.349 [2024-12-09 23:12:45.806768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.607 [2024-12-09 23:12:45.905127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.607 [2024-12-09 23:12:45.905343] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:31:07.607 [2024-12-09 23:12:45.905367] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:07.607 [2024-12-09 23:12:45.905376] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:07.864 00:31:07.864 real 0m0.496s 00:31:07.864 user 0m0.297s 00:31:07.864 sys 0m0.096s 00:31:07.864 ************************************ 00:31:07.864 END TEST bdev_json_nonenclosed 00:31:07.864 ************************************ 00:31:07.864 23:12:46 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.864 23:12:46 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:31:07.864 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:07.864 23:12:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:31:07.864 23:12:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:07.864 23:12:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:07.864 ************************************ 00:31:07.864 START TEST bdev_json_nonarray 00:31:07.864 ************************************ 00:31:07.864 23:12:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:07.864 [2024-12-09 23:12:46.188901] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:31:07.864 [2024-12-09 23:12:46.189156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73503 ] 00:31:08.122 [2024-12-09 23:12:46.343672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.122 [2024-12-09 23:12:46.442765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.122 [2024-12-09 23:12:46.442847] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
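Both JSON negative tests feed bdevperf a deliberately malformed config and expect a clean error exit rather than a crash. For contrast, a well-formed SPDK JSON config is a single object whose "subsystems" key holds an array, shaped roughly like:

{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}

The contents of nonenclosed.json and nonarray.json are not reproduced in this log; judging only by the two error strings, the first presumably drops the enclosing {} and the second makes "subsystems" something other than an array.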
00:31:08.122 [2024-12-09 23:12:46.442863] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:08.122 [2024-12-09 23:12:46.442873] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:08.380 00:31:08.380 real 0m0.493s 00:31:08.380 user 0m0.303s 00:31:08.380 sys 0m0.085s 00:31:08.380 23:12:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:08.380 ************************************ 00:31:08.380 END TEST bdev_json_nonarray 00:31:08.380 ************************************ 00:31:08.380 23:12:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:31:08.380 23:12:46 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:08.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:55.298 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:55.298 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:01.849 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:01.849 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:01.849 ************************************ 00:32:01.849 END TEST blockdev_xnvme 00:32:01.849 ************************************ 00:32:01.849 00:32:01.849 real 1m40.075s 00:32:01.849 user 1m24.061s 00:32:01.849 sys 2m42.532s 00:32:01.849 23:13:40 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.849 23:13:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:01.849 23:13:40 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:32:01.849 23:13:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:01.849 23:13:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:01.849 23:13:40 -- common/autotest_common.sh@10 -- # set +x 00:32:01.849 ************************************ 00:32:01.849 START TEST ublk 00:32:01.849 ************************************ 00:32:01.849 23:13:40 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:32:02.107 * Looking for test storage... 
00:32:02.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:32:02.107 23:13:40 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:02.107 23:13:40 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:32:02.107 23:13:40 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:02.107 23:13:40 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:02.107 23:13:40 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.107 23:13:40 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.107 23:13:40 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.107 23:13:40 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.107 23:13:40 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.107 23:13:40 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.107 23:13:40 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.107 23:13:40 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.107 23:13:40 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.107 23:13:40 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.107 23:13:40 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.107 23:13:40 ublk -- scripts/common.sh@344 -- # case "$op" in 00:32:02.107 23:13:40 ublk -- scripts/common.sh@345 -- # : 1 00:32:02.107 23:13:40 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.107 23:13:40 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:02.107 23:13:40 ublk -- scripts/common.sh@365 -- # decimal 1 00:32:02.107 23:13:40 ublk -- scripts/common.sh@353 -- # local d=1 00:32:02.107 23:13:40 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.107 23:13:40 ublk -- scripts/common.sh@355 -- # echo 1 00:32:02.107 23:13:40 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.107 23:13:40 ublk -- scripts/common.sh@366 -- # decimal 2 00:32:02.107 23:13:40 ublk -- scripts/common.sh@353 -- # local d=2 00:32:02.107 23:13:40 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.107 23:13:40 ublk -- scripts/common.sh@355 -- # echo 2 00:32:02.107 23:13:40 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.107 23:13:40 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.107 23:13:40 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.107 23:13:40 ublk -- scripts/common.sh@368 -- # return 0 00:32:02.107 23:13:40 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.107 23:13:40 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:02.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.107 --rc genhtml_branch_coverage=1 00:32:02.107 --rc genhtml_function_coverage=1 00:32:02.107 --rc genhtml_legend=1 00:32:02.107 --rc geninfo_all_blocks=1 00:32:02.107 --rc geninfo_unexecuted_blocks=1 00:32:02.107 00:32:02.107 ' 00:32:02.107 23:13:40 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:02.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.107 --rc genhtml_branch_coverage=1 00:32:02.107 --rc genhtml_function_coverage=1 00:32:02.107 --rc genhtml_legend=1 00:32:02.107 --rc geninfo_all_blocks=1 00:32:02.107 --rc geninfo_unexecuted_blocks=1 00:32:02.107 00:32:02.107 ' 00:32:02.107 23:13:40 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:02.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.108 --rc genhtml_branch_coverage=1 00:32:02.108 --rc 
genhtml_function_coverage=1 00:32:02.108 --rc genhtml_legend=1 00:32:02.108 --rc geninfo_all_blocks=1 00:32:02.108 --rc geninfo_unexecuted_blocks=1 00:32:02.108 00:32:02.108 ' 00:32:02.108 23:13:40 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:02.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.108 --rc genhtml_branch_coverage=1 00:32:02.108 --rc genhtml_function_coverage=1 00:32:02.108 --rc genhtml_legend=1 00:32:02.108 --rc geninfo_all_blocks=1 00:32:02.108 --rc geninfo_unexecuted_blocks=1 00:32:02.108 00:32:02.108 ' 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:32:02.108 23:13:40 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:32:02.108 23:13:40 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:32:02.108 23:13:40 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:32:02.108 23:13:40 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:32:02.108 23:13:40 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:32:02.108 23:13:40 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:32:02.108 23:13:40 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:32:02.108 23:13:40 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:32:02.108 23:13:40 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:32:02.108 23:13:40 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:02.108 23:13:40 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.108 23:13:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:32:02.108 ************************************ 00:32:02.108 START TEST test_save_ublk_config 00:32:02.108 ************************************ 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73810 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73810 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73810 ']' 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:02.108 23:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.108 23:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:32:02.108 [2024-12-09 23:13:40.492911] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:32:02.108 [2024-12-09 23:13:40.493164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73810 ] 00:32:02.367 [2024-12-09 23:13:40.650545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.367 [2024-12-09 23:13:40.753168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.934 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.934 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:32:02.934 23:13:41 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:32:02.934 23:13:41 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:32:02.934 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.934 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:32:02.934 [2024-12-09 23:13:41.378249] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:32:02.934 [2024-12-09 23:13:41.379061] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:32:03.192 malloc0 00:32:03.192 [2024-12-09 23:13:41.442356] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:32:03.192 [2024-12-09 23:13:41.442437] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:32:03.192 [2024-12-09 23:13:41.442447] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:32:03.192 [2024-12-09 23:13:41.442454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:32:03.192 [2024-12-09 23:13:41.450375] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:32:03.192 [2024-12-09 23:13:41.450397] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:32:03.192 [2024-12-09 23:13:41.458242] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:32:03.192 [2024-12-09 23:13:41.458345] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:32:03.192 [2024-12-09 23:13:41.475249] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:32:03.192 0 00:32:03.192 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.192 23:13:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:32:03.192 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.192 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:32:03.450 
"subsystems": [ 00:32:03.450 { 00:32:03.450 "subsystem": "fsdev", 00:32:03.450 "config": [ 00:32:03.450 { 00:32:03.450 "method": "fsdev_set_opts", 00:32:03.450 "params": { 00:32:03.450 "fsdev_io_pool_size": 65535, 00:32:03.450 "fsdev_io_cache_size": 256 00:32:03.450 } 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "keyring", 00:32:03.450 "config": [] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "iobuf", 00:32:03.450 "config": [ 00:32:03.450 { 00:32:03.450 "method": "iobuf_set_options", 00:32:03.450 "params": { 00:32:03.450 "small_pool_count": 8192, 00:32:03.450 "large_pool_count": 1024, 00:32:03.450 "small_bufsize": 8192, 00:32:03.450 "large_bufsize": 135168, 00:32:03.450 "enable_numa": false 00:32:03.450 } 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "sock", 00:32:03.450 "config": [ 00:32:03.450 { 00:32:03.450 "method": "sock_set_default_impl", 00:32:03.450 "params": { 00:32:03.450 "impl_name": "posix" 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "sock_impl_set_options", 00:32:03.450 "params": { 00:32:03.450 "impl_name": "ssl", 00:32:03.450 "recv_buf_size": 4096, 00:32:03.450 "send_buf_size": 4096, 00:32:03.450 "enable_recv_pipe": true, 00:32:03.450 "enable_quickack": false, 00:32:03.450 "enable_placement_id": 0, 00:32:03.450 "enable_zerocopy_send_server": true, 00:32:03.450 "enable_zerocopy_send_client": false, 00:32:03.450 "zerocopy_threshold": 0, 00:32:03.450 "tls_version": 0, 00:32:03.450 "enable_ktls": false 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "sock_impl_set_options", 00:32:03.450 "params": { 00:32:03.450 "impl_name": "posix", 00:32:03.450 "recv_buf_size": 2097152, 00:32:03.450 "send_buf_size": 2097152, 00:32:03.450 "enable_recv_pipe": true, 00:32:03.450 "enable_quickack": false, 00:32:03.450 "enable_placement_id": 0, 00:32:03.450 "enable_zerocopy_send_server": true, 00:32:03.450 "enable_zerocopy_send_client": false, 00:32:03.450 "zerocopy_threshold": 0, 00:32:03.450 "tls_version": 0, 00:32:03.450 "enable_ktls": false 00:32:03.450 } 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "vmd", 00:32:03.450 "config": [] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "accel", 00:32:03.450 "config": [ 00:32:03.450 { 00:32:03.450 "method": "accel_set_options", 00:32:03.450 "params": { 00:32:03.450 "small_cache_size": 128, 00:32:03.450 "large_cache_size": 16, 00:32:03.450 "task_count": 2048, 00:32:03.450 "sequence_count": 2048, 00:32:03.450 "buf_count": 2048 00:32:03.450 } 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "bdev", 00:32:03.450 "config": [ 00:32:03.450 { 00:32:03.450 "method": "bdev_set_options", 00:32:03.450 "params": { 00:32:03.450 "bdev_io_pool_size": 65535, 00:32:03.450 "bdev_io_cache_size": 256, 00:32:03.450 "bdev_auto_examine": true, 00:32:03.450 "iobuf_small_cache_size": 128, 00:32:03.450 "iobuf_large_cache_size": 16 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "bdev_raid_set_options", 00:32:03.450 "params": { 00:32:03.450 "process_window_size_kb": 1024, 00:32:03.450 "process_max_bandwidth_mb_sec": 0 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "bdev_iscsi_set_options", 00:32:03.450 "params": { 00:32:03.450 "timeout_sec": 30 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "bdev_nvme_set_options", 00:32:03.450 "params": { 00:32:03.450 "action_on_timeout": "none", 
00:32:03.450 "timeout_us": 0, 00:32:03.450 "timeout_admin_us": 0, 00:32:03.450 "keep_alive_timeout_ms": 10000, 00:32:03.450 "arbitration_burst": 0, 00:32:03.450 "low_priority_weight": 0, 00:32:03.450 "medium_priority_weight": 0, 00:32:03.450 "high_priority_weight": 0, 00:32:03.450 "nvme_adminq_poll_period_us": 10000, 00:32:03.450 "nvme_ioq_poll_period_us": 0, 00:32:03.450 "io_queue_requests": 0, 00:32:03.450 "delay_cmd_submit": true, 00:32:03.450 "transport_retry_count": 4, 00:32:03.450 "bdev_retry_count": 3, 00:32:03.450 "transport_ack_timeout": 0, 00:32:03.450 "ctrlr_loss_timeout_sec": 0, 00:32:03.450 "reconnect_delay_sec": 0, 00:32:03.450 "fast_io_fail_timeout_sec": 0, 00:32:03.450 "disable_auto_failback": false, 00:32:03.450 "generate_uuids": false, 00:32:03.450 "transport_tos": 0, 00:32:03.450 "nvme_error_stat": false, 00:32:03.450 "rdma_srq_size": 0, 00:32:03.450 "io_path_stat": false, 00:32:03.450 "allow_accel_sequence": false, 00:32:03.450 "rdma_max_cq_size": 0, 00:32:03.450 "rdma_cm_event_timeout_ms": 0, 00:32:03.450 "dhchap_digests": [ 00:32:03.450 "sha256", 00:32:03.450 "sha384", 00:32:03.450 "sha512" 00:32:03.450 ], 00:32:03.450 "dhchap_dhgroups": [ 00:32:03.450 "null", 00:32:03.450 "ffdhe2048", 00:32:03.450 "ffdhe3072", 00:32:03.450 "ffdhe4096", 00:32:03.450 "ffdhe6144", 00:32:03.450 "ffdhe8192" 00:32:03.450 ] 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "bdev_nvme_set_hotplug", 00:32:03.450 "params": { 00:32:03.450 "period_us": 100000, 00:32:03.450 "enable": false 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "bdev_malloc_create", 00:32:03.450 "params": { 00:32:03.450 "name": "malloc0", 00:32:03.450 "num_blocks": 8192, 00:32:03.450 "block_size": 4096, 00:32:03.450 "physical_block_size": 4096, 00:32:03.450 "uuid": "e5d885cd-a20b-4150-be94-bdcf7eb0b285", 00:32:03.450 "optimal_io_boundary": 0, 00:32:03.450 "md_size": 0, 00:32:03.450 "dif_type": 0, 00:32:03.450 "dif_is_head_of_md": false, 00:32:03.450 "dif_pi_format": 0 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "bdev_wait_for_examine" 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "scsi", 00:32:03.450 "config": null 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "scheduler", 00:32:03.450 "config": [ 00:32:03.450 { 00:32:03.450 "method": "framework_set_scheduler", 00:32:03.450 "params": { 00:32:03.450 "name": "static" 00:32:03.450 } 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "vhost_scsi", 00:32:03.450 "config": [] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "vhost_blk", 00:32:03.450 "config": [] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "ublk", 00:32:03.450 "config": [ 00:32:03.450 { 00:32:03.450 "method": "ublk_create_target", 00:32:03.450 "params": { 00:32:03.450 "cpumask": "1" 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "ublk_start_disk", 00:32:03.450 "params": { 00:32:03.450 "bdev_name": "malloc0", 00:32:03.450 "ublk_id": 0, 00:32:03.450 "num_queues": 1, 00:32:03.450 "queue_depth": 128 00:32:03.450 } 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "nbd", 00:32:03.450 "config": [] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "nvmf", 00:32:03.450 "config": [ 00:32:03.450 { 00:32:03.450 "method": "nvmf_set_config", 00:32:03.450 "params": { 00:32:03.450 "discovery_filter": "match_any", 00:32:03.450 "admin_cmd_passthru": { 00:32:03.450 "identify_ctrlr": false 
00:32:03.450 }, 00:32:03.450 "dhchap_digests": [ 00:32:03.450 "sha256", 00:32:03.450 "sha384", 00:32:03.450 "sha512" 00:32:03.450 ], 00:32:03.450 "dhchap_dhgroups": [ 00:32:03.450 "null", 00:32:03.450 "ffdhe2048", 00:32:03.450 "ffdhe3072", 00:32:03.450 "ffdhe4096", 00:32:03.450 "ffdhe6144", 00:32:03.450 "ffdhe8192" 00:32:03.450 ] 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "nvmf_set_max_subsystems", 00:32:03.450 "params": { 00:32:03.450 "max_subsystems": 1024 00:32:03.450 } 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "method": "nvmf_set_crdt", 00:32:03.450 "params": { 00:32:03.450 "crdt1": 0, 00:32:03.450 "crdt2": 0, 00:32:03.450 "crdt3": 0 00:32:03.450 } 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "subsystem": "iscsi", 00:32:03.450 "config": [ 00:32:03.450 { 00:32:03.450 "method": "iscsi_set_options", 00:32:03.450 "params": { 00:32:03.450 "node_base": "iqn.2016-06.io.spdk", 00:32:03.450 "max_sessions": 128, 00:32:03.450 "max_connections_per_session": 2, 00:32:03.450 "max_queue_depth": 64, 00:32:03.450 "default_time2wait": 2, 00:32:03.450 "default_time2retain": 20, 00:32:03.450 "first_burst_length": 8192, 00:32:03.450 "immediate_data": true, 00:32:03.450 "allow_duplicated_isid": false, 00:32:03.450 "error_recovery_level": 0, 00:32:03.450 "nop_timeout": 60, 00:32:03.450 "nop_in_interval": 30, 00:32:03.450 "disable_chap": false, 00:32:03.450 "require_chap": false, 00:32:03.450 "mutual_chap": false, 00:32:03.450 "chap_group": 0, 00:32:03.450 "max_large_datain_per_connection": 64, 00:32:03.450 "max_r2t_per_connection": 4, 00:32:03.450 "pdu_pool_size": 36864, 00:32:03.450 "immediate_data_pool_size": 16384, 00:32:03.450 "data_out_pool_size": 2048 00:32:03.450 } 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 }' 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73810 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73810 ']' 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73810 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73810 00:32:03.450 killing process with pid 73810 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73810' 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73810 00:32:03.450 23:13:41 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73810 00:32:04.385 [2024-12-09 23:13:42.820520] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:32:04.649 [2024-12-09 23:13:42.847325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:32:04.649 [2024-12-09 23:13:42.847442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:32:04.649 [2024-12-09 23:13:42.855257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:32:04.649 [2024-12-09 
23:13:42.855306] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:32:04.649 [2024-12-09 23:13:42.855318] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:32:04.649 [2024-12-09 23:13:42.855351] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:32:04.649 [2024-12-09 23:13:42.855490] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:32:06.021 23:13:44 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73864 00:32:06.021 23:13:44 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:32:06.021 "subsystems": [ 00:32:06.021 { 00:32:06.021 "subsystem": "fsdev", 00:32:06.021 "config": [ 00:32:06.021 { 00:32:06.021 "method": "fsdev_set_opts", 00:32:06.021 "params": { 00:32:06.021 "fsdev_io_pool_size": 65535, 00:32:06.021 "fsdev_io_cache_size": 256 00:32:06.021 } 00:32:06.021 } 00:32:06.021 ] 00:32:06.021 }, 00:32:06.021 { 00:32:06.021 "subsystem": "keyring", 00:32:06.021 "config": [] 00:32:06.021 }, 00:32:06.021 { 00:32:06.021 "subsystem": "iobuf", 00:32:06.021 "config": [ 00:32:06.021 { 00:32:06.021 "method": "iobuf_set_options", 00:32:06.021 "params": { 00:32:06.021 "small_pool_count": 8192, 00:32:06.021 "large_pool_count": 1024, 00:32:06.021 "small_bufsize": 8192, 00:32:06.021 "large_bufsize": 135168, 00:32:06.021 "enable_numa": false 00:32:06.021 } 00:32:06.021 } 00:32:06.021 ] 00:32:06.021 }, 00:32:06.021 { 00:32:06.021 "subsystem": "sock", 00:32:06.021 "config": [ 00:32:06.021 { 00:32:06.021 "method": "sock_set_default_impl", 00:32:06.021 "params": { 00:32:06.021 "impl_name": "posix" 00:32:06.021 } 00:32:06.021 }, 00:32:06.021 { 00:32:06.021 "method": "sock_impl_set_options", 00:32:06.021 "params": { 00:32:06.021 "impl_name": "ssl", 00:32:06.021 "recv_buf_size": 4096, 00:32:06.021 "send_buf_size": 4096, 00:32:06.021 "enable_recv_pipe": true, 00:32:06.021 "enable_quickack": false, 00:32:06.021 "enable_placement_id": 0, 00:32:06.021 "enable_zerocopy_send_server": true, 00:32:06.021 "enable_zerocopy_send_client": false, 00:32:06.021 "zerocopy_threshold": 0, 00:32:06.021 "tls_version": 0, 00:32:06.021 "enable_ktls": false 00:32:06.022 } 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "method": "sock_impl_set_options", 00:32:06.022 "params": { 00:32:06.022 "impl_name": "posix", 00:32:06.022 "recv_buf_size": 2097152, 00:32:06.022 "send_buf_size": 2097152, 00:32:06.022 "enable_recv_pipe": true, 00:32:06.022 "enable_quickack": false, 00:32:06.022 "enable_placement_id": 0, 00:32:06.022 "enable_zerocopy_send_server": true, 00:32:06.022 "enable_zerocopy_send_client": false, 00:32:06.022 "zerocopy_threshold": 0, 00:32:06.022 "tls_version": 0, 00:32:06.022 "enable_ktls": false 00:32:06.022 } 00:32:06.022 } 00:32:06.022 ] 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "subsystem": "vmd", 00:32:06.022 "config": [] 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "subsystem": "accel", 00:32:06.022 "config": [ 00:32:06.022 { 00:32:06.022 "method": "accel_set_options", 00:32:06.022 "params": { 00:32:06.022 "small_cache_size": 128, 00:32:06.022 "large_cache_size": 16, 00:32:06.022 "task_count": 2048, 00:32:06.022 "sequence_count": 2048, 00:32:06.022 "buf_count": 2048 00:32:06.022 } 00:32:06.022 } 00:32:06.022 ] 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "subsystem": "bdev", 00:32:06.022 "config": [ 00:32:06.022 { 00:32:06.022 "method": "bdev_set_options", 00:32:06.022 "params": { 00:32:06.022 "bdev_io_pool_size": 65535, 00:32:06.022 "bdev_io_cache_size": 256, 00:32:06.022 "bdev_auto_examine": true, 00:32:06.022 "iobuf_small_cache_size": 128, 00:32:06.022 
"iobuf_large_cache_size": 16 00:32:06.022 } 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "method": "bdev_raid_set_options", 00:32:06.022 "params": { 00:32:06.022 "process_window_size_kb": 1024, 00:32:06.022 "process_max_bandwidth_mb_sec": 0 00:32:06.022 } 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "method": "bdev_iscsi_set_options", 00:32:06.022 "params": { 00:32:06.022 "timeout_sec": 30 00:32:06.022 } 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "method": "bdev_nvme_set_options", 00:32:06.022 "params": { 00:32:06.022 "action_on_timeout": "none", 00:32:06.022 "timeout_us": 0, 00:32:06.022 "timeout_admin_us": 0, 00:32:06.022 "keep_alive_timeout_ms": 10000, 00:32:06.022 "arbitration_burst": 0, 00:32:06.022 "low_priority_weight": 0, 00:32:06.022 "medium_priority_weight": 0, 00:32:06.022 "high_priority_weight": 0, 00:32:06.022 "nvme_adminq_poll_period_us": 10000, 00:32:06.022 "nvme_ioq_poll_period_us": 0, 00:32:06.022 "io_queue_requests": 0, 00:32:06.022 "delay_cmd_submit": true, 00:32:06.022 "transport_retry_count": 4, 00:32:06.022 "bdev_retry_count": 3, 00:32:06.022 "transport_ack_timeout": 0, 00:32:06.022 "ctrlr_loss_timeout_sec": 0, 00:32:06.022 "reconnect_delay_sec": 0, 00:32:06.022 "fast_io_fail_timeout_sec": 0, 00:32:06.022 "disable_auto_failback": false, 00:32:06.022 "generate_uuids": false, 00:32:06.022 "transport_tos": 0, 00:32:06.022 "nvme_error_stat": false, 00:32:06.022 "rdma_srq_size": 0, 00:32:06.022 "io_path_stat": false, 00:32:06.022 "allow_accel_sequence": false, 00:32:06.022 "rdma_max_cq_size": 0, 00:32:06.022 "rdma_cm_event_timeout_ms": 0, 00:32:06.022 "dhchap_digests": [ 00:32:06.022 "sha256", 00:32:06.022 "sha384", 00:32:06.022 "sha512" 00:32:06.022 ], 00:32:06.022 "dhchap_dhgroups": [ 00:32:06.022 "null", 00:32:06.022 "ffdhe2048", 00:32:06.022 "ffdhe3072", 00:32:06.022 "ffdhe4096", 00:32:06.022 "ffdhe6144", 00:32:06.022 "ffdhe8192" 00:32:06.022 ] 00:32:06.022 } 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "method": "bdev_nvme_set_hotplug", 00:32:06.022 "params": { 00:32:06.022 "period_us": 100000, 00:32:06.022 "enable": false 00:32:06.022 } 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "method": "bdev_malloc_create", 00:32:06.022 "params": { 00:32:06.022 "name": "malloc0", 00:32:06.022 "num_blocks": 8192, 00:32:06.022 "block_size": 4096, 00:32:06.022 "physical_block_size": 4096, 00:32:06.022 "uuid": "e5d885cd-a20b-4150-be94-bdcf7eb0b285", 00:32:06.022 "optimal_io_boundary": 0, 00:32:06.022 "md_size": 0, 00:32:06.022 "dif_type": 0, 00:32:06.022 "dif_is_head_of_md": false, 00:32:06.022 "dif_pi_format": 0 00:32:06.022 } 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "method": "bdev_wait_for_examine" 00:32:06.022 } 00:32:06.022 ] 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "subsystem": "scsi", 00:32:06.022 "config": null 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "subsystem": "scheduler", 00:32:06.022 "config": [ 00:32:06.022 { 00:32:06.022 "method": "framework_set_scheduler", 00:32:06.022 "params": { 00:32:06.022 "name": "static" 00:32:06.022 } 00:32:06.022 } 00:32:06.022 ] 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "subsystem": "vhost_scsi", 00:32:06.022 "config": [] 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "subsystem": "vhost_blk", 00:32:06.022 "config": [] 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "subsystem": "ublk", 00:32:06.022 "config": [ 00:32:06.022 { 00:32:06.022 "method": "ublk_create_target", 00:32:06.022 "params": { 00:32:06.022 "cpumask": "1" 00:32:06.022 } 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "method": "ublk_start_disk", 00:32:06.022 "params": { 
00:32:06.022 "bdev_name": "malloc0", 00:32:06.022 "ublk_id": 0, 00:32:06.022 "num_queues": 1, 00:32:06.022 "queue_depth": 128 00:32:06.022 } 00:32:06.022 } 00:32:06.022 ] 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "subsystem": "nbd", 00:32:06.022 "config": [] 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "subsystem": "nvmf", 00:32:06.022 "config": [ 00:32:06.022 { 00:32:06.022 "method": "nvmf_set_config", 00:32:06.022 "params": { 00:32:06.022 "discovery_filter": "match_any", 00:32:06.022 "admin_cmd_passthru": { 00:32:06.022 "identify_ctrlr": false 00:32:06.022 }, 00:32:06.022 "dhchap_digests": [ 00:32:06.022 "sha256", 00:32:06.022 "sha384", 00:32:06.022 "sha512" 00:32:06.022 ], 00:32:06.022 "dhchap_dhgroups": [ 00:32:06.022 "null", 00:32:06.022 "ffdhe2048", 00:32:06.022 "ffdhe3072", 00:32:06.022 "ffdhe4096", 00:32:06.022 "ffdhe6144", 00:32:06.022 "ffdhe8192" 00:32:06.022 ] 00:32:06.022 } 00:32:06.022 }, 00:32:06.022 { 00:32:06.022 "method": "nvmf_set_max_subsystems", 00:32:06.023 "params": { 00:32:06.023 "max_subsystems": 1024 00:32:06.023 } 00:32:06.023 }, 00:32:06.023 { 00:32:06.023 "method": "nvmf_set_crdt", 00:32:06.023 "params": { 00:32:06.023 "crdt1": 0, 00:32:06.023 "crdt2": 0, 00:32:06.023 "crdt3": 0 00:32:06.023 } 00:32:06.023 } 00:32:06.023 ] 00:32:06.023 }, 00:32:06.023 { 00:32:06.023 "subsystem": "iscsi", 00:32:06.023 "config": [ 00:32:06.023 { 00:32:06.023 "method": "iscsi_set_options", 00:32:06.023 "params": { 00:32:06.023 "node_base": "iqn.2016-06.io.spdk", 00:32:06.023 "max_sessions": 128, 00:32:06.023 "max_connections_per_session": 2, 00:32:06.023 "max_queue_depth": 64, 00:32:06.023 "default_time2wait": 2, 00:32:06.023 "default_time2retain": 20, 00:32:06.023 "first_burst_length": 8192, 00:32:06.023 "immediate_data": true, 00:32:06.023 "allow_duplicated_isid": false, 00:32:06.023 "error_recovery_level": 0, 00:32:06.023 "nop_timeout": 60, 00:32:06.023 "nop_in_interval": 30, 00:32:06.023 "disable_chap": false, 00:32:06.023 "require_chap": false, 00:32:06.023 "mutual_chap": false, 00:32:06.023 "chap_group": 0, 00:32:06.023 "max_large_datain_per_connection": 64, 00:32:06.023 "max_r2t_per_connection": 4, 00:32:06.023 "pdu_pool_size": 36864, 00:32:06.023 "immediate_data_pool_size": 16384, 00:32:06.023 "data_out_pool_size": 2048 00:32:06.023 } 00:32:06.023 } 00:32:06.023 ] 00:32:06.023 } 00:32:06.023 ] 00:32:06.023 }' 00:32:06.023 23:13:44 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73864 00:32:06.023 23:13:44 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:32:06.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.023 23:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73864 ']' 00:32:06.023 23:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.023 23:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:06.023 23:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.023 23:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:06.023 23:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:32:06.023 [2024-12-09 23:13:44.301540] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:32:06.023 [2024-12-09 23:13:44.301804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73864 ] 00:32:06.023 [2024-12-09 23:13:44.458571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.282 [2024-12-09 23:13:44.561524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.216 [2024-12-09 23:13:45.331239] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:32:07.216 [2024-12-09 23:13:45.332037] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:32:07.216 [2024-12-09 23:13:45.339361] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:32:07.216 [2024-12-09 23:13:45.339431] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:32:07.216 [2024-12-09 23:13:45.339440] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:32:07.216 [2024-12-09 23:13:45.339447] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:32:07.216 [2024-12-09 23:13:45.347357] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:32:07.216 [2024-12-09 23:13:45.347443] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:32:07.216 [2024-12-09 23:13:45.355250] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:32:07.216 [2024-12-09 23:13:45.355414] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:32:07.216 [2024-12-09 23:13:45.372238] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:32:07.216 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.216 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:32:07.216 23:13:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:32:07.216 23:13:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:32:07.216 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.216 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:32:07.216 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.216 23:13:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:32:07.216 23:13:45 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:32:07.216 23:13:45 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73864 00:32:07.217 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73864 ']' 00:32:07.217 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73864 00:32:07.217 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:32:07.217 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.217 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73864 00:32:07.217 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:07.217 23:13:45 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:07.217 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73864' 00:32:07.217 killing process with pid 73864 00:32:07.217 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73864 00:32:07.217 23:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73864 00:32:08.590 [2024-12-09 23:13:46.615821] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:32:08.590 [2024-12-09 23:13:46.648302] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:32:08.590 [2024-12-09 23:13:46.652366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:32:08.590 [2024-12-09 23:13:46.662233] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:32:08.590 [2024-12-09 23:13:46.662292] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:32:08.590 [2024-12-09 23:13:46.662301] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:32:08.590 [2024-12-09 23:13:46.662326] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:32:08.590 [2024-12-09 23:13:46.662473] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:32:09.964 23:13:48 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:32:09.964 ************************************ 00:32:09.964 END TEST test_save_ublk_config 00:32:09.964 ************************************ 00:32:09.964 00:32:09.964 real 0m7.636s 00:32:09.964 user 0m5.411s 00:32:09.964 sys 0m2.788s 00:32:09.964 23:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.964 23:13:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:32:09.964 23:13:48 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73937 00:32:09.964 23:13:48 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:09.964 23:13:48 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73937 00:32:09.964 23:13:48 ublk -- common/autotest_common.sh@835 -- # '[' -z 73937 ']' 00:32:09.964 23:13:48 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.964 23:13:48 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.964 23:13:48 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:32:09.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.964 23:13:48 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.964 23:13:48 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.964 23:13:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:32:09.964 [2024-12-09 23:13:48.178424] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:32:09.964 [2024-12-09 23:13:48.178555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73937 ] 00:32:09.964 [2024-12-09 23:13:48.331676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:10.239 [2024-12-09 23:13:48.436410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.239 [2024-12-09 23:13:48.436585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.812 23:13:49 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.812 23:13:49 ublk -- common/autotest_common.sh@868 -- # return 0 00:32:10.812 23:13:49 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:32:10.812 23:13:49 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:10.812 23:13:49 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:10.812 23:13:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:32:10.812 ************************************ 00:32:10.812 START TEST test_create_ublk 00:32:10.812 ************************************ 00:32:10.812 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:32:10.812 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:32:10.812 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.812 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:10.812 [2024-12-09 23:13:49.058241] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:32:10.812 [2024-12-09 23:13:49.060170] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:32:10.812 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.812 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:32:10.812 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:32:10.812 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.812 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:10.812 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.812 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:32:10.812 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:32:10.812 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.812 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:10.812 [2024-12-09 23:13:49.263378] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:32:10.812 [2024-12-09 23:13:49.263753] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:32:10.812 [2024-12-09 23:13:49.263764] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:32:10.812 [2024-12-09 23:13:49.263772] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:32:10.812 [2024-12-09 23:13:49.271257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:32:10.812 [2024-12-09 23:13:49.271282] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:32:11.070 
[2024-12-09 23:13:49.279244] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:32:11.070 [2024-12-09 23:13:49.279873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:32:11.070 [2024-12-09 23:13:49.302256] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:32:11.071 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:32:11.071 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.071 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:11.071 23:13:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:32:11.071 { 00:32:11.071 "ublk_device": "/dev/ublkb0", 00:32:11.071 "id": 0, 00:32:11.071 "queue_depth": 512, 00:32:11.071 "num_queues": 4, 00:32:11.071 "bdev_name": "Malloc0" 00:32:11.071 } 00:32:11.071 ]' 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:32:11.071 23:13:49 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:32:11.071 23:13:49 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:32:11.329 fio: verification read phase will never start because write phase uses all of runtime
00:32:11.329 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:32:11.329 fio-3.35
00:32:11.329 Starting 1 process
00:32:21.335
00:32:21.335 fio_test: (groupid=0, jobs=1): err= 0: pid=73982: Mon Dec 9 23:13:59 2024
00:32:21.335 write: IOPS=18.6k, BW=72.8MiB/s (76.4MB/s)(728MiB/10001msec); 0 zone resets
00:32:21.335 clat (usec): min=34, max=4084, avg=52.79, stdev=88.79
00:32:21.335 lat (usec): min=34, max=4084, avg=53.27, stdev=88.81
00:32:21.335 clat percentiles (usec):
00:32:21.335 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 43],
00:32:21.335 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 50],
00:32:21.335 | 70.00th=[ 52], 80.00th=[ 55], 90.00th=[ 59], 95.00th=[ 64],
00:32:21.335 | 99.00th=[ 76], 99.50th=[ 99], 99.90th=[ 1614], 99.95th=[ 2573],
00:32:21.335 | 99.99th=[ 3556]
00:32:21.335 bw ( KiB/s): min=65056, max=85064, per=99.80%, avg=74439.58, stdev=6353.91, samples=19
00:32:21.335 iops : min=16264, max=21266, avg=18609.89, stdev=1588.48, samples=19
00:32:21.335 lat (usec) : 50=61.73%, 100=37.77%, 250=0.25%, 500=0.10%, 750=0.01%
00:32:21.335 lat (usec) : 1000=0.01%
00:32:21.335 lat (msec) : 2=0.06%, 4=0.08%, 10=0.01%
00:32:21.335 cpu : usr=2.97%, sys=13.71%, ctx=186485, majf=0, minf=796
00:32:21.335 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:21.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:21.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:21.335 issued rwts: total=0,186487,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:21.335 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:21.335
00:32:21.335 Run status group 0 (all jobs):
00:32:21.335 WRITE: bw=72.8MiB/s (76.4MB/s), 72.8MiB/s-72.8MiB/s (76.4MB/s-76.4MB/s), io=728MiB (764MB), run=10001-10001msec
00:32:21.335
00:32:21.335 Disk stats (read/write):
00:32:21.335 ublkb0: ios=0/184395, merge=0/0, ticks=0/8323, in_queue=8324, util=99.08%
00:32:21.335 23:13:59 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:32:21.335 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:21.335 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:32:21.335 [2024-12-09 23:13:59.719639] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:32:21.335 [2024-12-09 23:13:59.759715] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:32:21.335 [2024-12-09 23:13:59.760593] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:32:21.616 [2024-12-09 23:13:59.775322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:32:21.616 [2024-12-09 23:13:59.775656] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:32:21.616 [2024-12-09 23:13:59.775689] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:21.616 23:13:59 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk
0 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:21.616 [2024-12-09 23:13:59.783308] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:32:21.616 request: 00:32:21.616 { 00:32:21.616 "ublk_id": 0, 00:32:21.616 "method": "ublk_stop_disk", 00:32:21.616 "req_id": 1 00:32:21.616 } 00:32:21.616 Got JSON-RPC error response 00:32:21.616 response: 00:32:21.616 { 00:32:21.616 "code": -19, 00:32:21.616 "message": "No such device" 00:32:21.616 } 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:21.616 23:13:59 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:21.616 [2024-12-09 23:13:59.799305] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:32:21.616 [2024-12-09 23:13:59.803118] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:32:21.616 [2024-12-09 23:13:59.803150] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.616 23:13:59 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.616 23:13:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:21.876 23:14:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.876 23:14:00 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:32:21.876 23:14:00 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:32:21.876 23:14:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.876 23:14:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:21.876 23:14:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.876 23:14:00 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:32:21.877 23:14:00 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:32:21.877 23:14:00 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:32:21.877 23:14:00 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:32:21.877 23:14:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.877 23:14:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:21.877 23:14:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.877 23:14:00 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:32:21.877 23:14:00 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:32:21.877 ************************************ 00:32:21.877 END TEST test_create_ublk 00:32:21.877 ************************************ 00:32:21.877 23:14:00 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:32:21.877 00:32:21.877 real 0m11.222s 00:32:21.877 user 0m0.601s 00:32:21.877 sys 0m1.445s 00:32:21.877 23:14:00 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.877 23:14:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:21.877 23:14:00 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:32:21.877 23:14:00 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:21.877 23:14:00 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.877 23:14:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:32:21.877 ************************************ 00:32:21.877 START TEST test_create_multi_ublk 00:32:21.877 ************************************ 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:21.877 [2024-12-09 23:14:00.322232] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:32:21.877 [2024-12-09 23:14:00.323852] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.877 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:22.136 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.136 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:32:22.136 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:32:22.136 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.136 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:22.136 [2024-12-09 23:14:00.550546] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:32:22.136 [2024-12-09 23:14:00.550859] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:32:22.136 [2024-12-09 23:14:00.550872] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:32:22.136 [2024-12-09 23:14:00.550881] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:32:22.136 [2024-12-09 23:14:00.562444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:32:22.136 [2024-12-09 23:14:00.562468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:32:22.136 [2024-12-09 23:14:00.574236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:32:22.136 [2024-12-09 23:14:00.574744] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:32:22.396 [2024-12-09 23:14:00.614244] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.396 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:22.396 [2024-12-09 23:14:00.846344] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:32:22.396 [2024-12-09 23:14:00.846647] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:32:22.396 [2024-12-09 23:14:00.846661] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:32:22.396 [2024-12-09 23:14:00.846666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:32:22.396 [2024-12-09 23:14:00.854256] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:32:22.396 [2024-12-09 23:14:00.854275] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:32:22.654 [2024-12-09 23:14:00.862245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:32:22.654 [2024-12-09 23:14:00.862778] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:32:22.654 [2024-12-09 23:14:00.871237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:32:22.654 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.654 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:32:22.654 23:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:22.654 23:14:00 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:32:22.654 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.654 23:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:22.654 [2024-12-09 23:14:01.038329] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:32:22.654 [2024-12-09 23:14:01.038635] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:32:22.654 [2024-12-09 23:14:01.038643] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:32:22.654 [2024-12-09 23:14:01.038650] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:32:22.654 [2024-12-09 23:14:01.046249] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:32:22.654 [2024-12-09 23:14:01.046271] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:32:22.654 [2024-12-09 23:14:01.054241] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:32:22.654 [2024-12-09 23:14:01.054771] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:32:22.654 [2024-12-09 23:14:01.058357] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.654 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:22.917 [2024-12-09 23:14:01.226367] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:32:22.917 [2024-12-09 23:14:01.226672] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:32:22.917 [2024-12-09 23:14:01.226685] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:32:22.917 [2024-12-09 23:14:01.226691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:32:22.917 [2024-12-09 
23:14:01.234460] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:32:22.917 [2024-12-09 23:14:01.234480] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:32:22.917 [2024-12-09 23:14:01.242254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:32:22.917 [2024-12-09 23:14:01.242762] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:32:22.917 [2024-12-09 23:14:01.255248] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:32:22.917 { 00:32:22.917 "ublk_device": "/dev/ublkb0", 00:32:22.917 "id": 0, 00:32:22.917 "queue_depth": 512, 00:32:22.917 "num_queues": 4, 00:32:22.917 "bdev_name": "Malloc0" 00:32:22.917 }, 00:32:22.917 { 00:32:22.917 "ublk_device": "/dev/ublkb1", 00:32:22.917 "id": 1, 00:32:22.917 "queue_depth": 512, 00:32:22.917 "num_queues": 4, 00:32:22.917 "bdev_name": "Malloc1" 00:32:22.917 }, 00:32:22.917 { 00:32:22.917 "ublk_device": "/dev/ublkb2", 00:32:22.917 "id": 2, 00:32:22.917 "queue_depth": 512, 00:32:22.917 "num_queues": 4, 00:32:22.917 "bdev_name": "Malloc2" 00:32:22.917 }, 00:32:22.917 { 00:32:22.917 "ublk_device": "/dev/ublkb3", 00:32:22.917 "id": 3, 00:32:22.917 "queue_depth": 512, 00:32:22.917 "num_queues": 4, 00:32:22.917 "bdev_name": "Malloc3" 00:32:22.917 } 00:32:22.917 ]' 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:32:22.917 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
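The jq assertions around this point (device 0 above, devices 1-3 continuing below) all follow one pattern: capture the ublk_get_disks listing once, then index into the JSON array and compare each field against the values the device was started with. A standalone sketch of that pattern (the shell variable name here is illustrative, not the script's own):

    ublk_dev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks)
    # device 1 should report the node, queue geometry, and backing bdev it was created with
    [[ $(jq -r '.[1].ublk_device' <<< "$ublk_dev") == /dev/ublkb1 ]]
    [[ $(jq -r '.[1].queue_depth' <<< "$ublk_dev") == 512 ]]
    [[ $(jq -r '.[1].bdev_name' <<< "$ublk_dev") == Malloc1 ]]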
00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:23.178 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:32:23.436 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:23.693 [2024-12-09 23:14:01.929334] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:32:23.693 [2024-12-09 23:14:01.969283] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:32:23.693 [2024-12-09 23:14:01.970009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:32:23.693 [2024-12-09 23:14:01.977339] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:32:23.693 [2024-12-09 23:14:01.977584] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:32:23.693 [2024-12-09 23:14:01.977598] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.693 23:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:23.693 [2024-12-09 23:14:01.993340] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:32:23.693 [2024-12-09 23:14:02.029292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:32:23.693 [2024-12-09 23:14:02.029965] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:32:23.693 [2024-12-09 23:14:02.036270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:32:23.693 [2024-12-09 23:14:02.036499] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:32:23.693 [2024-12-09 23:14:02.036514] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:32:23.693 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.693 23:14:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:23.693 23:14:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:32:23.693 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.693 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:23.693 [2024-12-09 23:14:02.053326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:32:23.693 [2024-12-09 23:14:02.094694] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:32:23.693 [2024-12-09 23:14:02.095624] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:32:23.693 [2024-12-09 23:14:02.105244] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:32:23.693 [2024-12-09 23:14:02.105475] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:32:23.693 [2024-12-09 23:14:02.105490] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:32:23.693 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.693 23:14:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:23.693 23:14:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:32:23.693 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.693 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:32:23.693 [2024-12-09 23:14:02.121314] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:32:23.951 [2024-12-09 23:14:02.155696] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:32:23.951 [2024-12-09 23:14:02.156594] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:32:23.951 [2024-12-09 23:14:02.161244] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:32:23.951 [2024-12-09 23:14:02.161465] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:32:23.951 [2024-12-09 23:14:02.161478] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:32:23.951 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.951 23:14:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:32:23.951 [2024-12-09 23:14:02.369314] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:32:23.951 [2024-12-09 23:14:02.372931] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:32:23.951 [2024-12-09 23:14:02.372965] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:32:23.951 23:14:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:32:23.951 23:14:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:23.951 23:14:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:23.951 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.951 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:24.516 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.516 23:14:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:24.516 23:14:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:32:24.516 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.516 23:14:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:24.774 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.774 23:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:24.774 23:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:32:24.774 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.774 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:25.032 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.032 23:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:32:25.032 23:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:32:25.032 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.032 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:32:25.289 ************************************ 00:32:25.289 END TEST test_create_multi_ublk 00:32:25.289 ************************************ 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:32:25.289 00:32:25.289 real 0m3.282s 00:32:25.289 user 0m0.832s 00:32:25.289 sys 0m0.144s 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.289 23:14:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:32:25.289 23:14:03 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:25.289 23:14:03 ublk -- ublk/ublk.sh@147 -- # cleanup 00:32:25.289 23:14:03 ublk -- ublk/ublk.sh@130 -- # killprocess 73937 00:32:25.289 23:14:03 ublk -- common/autotest_common.sh@954 -- # '[' -z 73937 ']' 00:32:25.289 23:14:03 ublk -- common/autotest_common.sh@958 -- # kill -0 73937 00:32:25.289 23:14:03 ublk -- common/autotest_common.sh@959 -- # uname 00:32:25.289 23:14:03 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.289 23:14:03 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73937 00:32:25.289 killing process with pid 73937 00:32:25.289 23:14:03 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.289 23:14:03 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.289 23:14:03 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73937' 00:32:25.289 23:14:03 ublk -- common/autotest_common.sh@973 -- # kill 73937 00:32:25.289 23:14:03 ublk -- common/autotest_common.sh@978 -- # wait 73937 00:32:25.854 [2024-12-09 23:14:04.171850] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:32:25.854 [2024-12-09 23:14:04.171900] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:32:26.418 00:32:26.418 real 0m24.583s 00:32:26.418 user 0m34.864s 00:32:26.418 sys 0m9.729s 00:32:26.418 23:14:04 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:26.418 23:14:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:32:26.418 ************************************ 00:32:26.418 END TEST ublk 00:32:26.418 ************************************ 00:32:26.676 23:14:04 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:32:26.676 23:14:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:32:26.676 23:14:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:26.676 23:14:04 -- common/autotest_common.sh@10 -- # set +x 00:32:26.676 ************************************ 00:32:26.676 START TEST ublk_recovery 00:32:26.676 ************************************ 00:32:26.676 23:14:04 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:32:26.676 * Looking for test storage... 00:32:26.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:32:26.676 23:14:04 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:26.676 23:14:04 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:26.676 23:14:04 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:26.676 23:14:05 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:26.676 23:14:05 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.677 23:14:05 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:26.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.677 --rc genhtml_branch_coverage=1 00:32:26.677 --rc genhtml_function_coverage=1 00:32:26.677 --rc genhtml_legend=1 00:32:26.677 --rc geninfo_all_blocks=1 00:32:26.677 --rc geninfo_unexecuted_blocks=1 00:32:26.677 00:32:26.677 ' 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:26.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.677 --rc genhtml_branch_coverage=1 00:32:26.677 --rc genhtml_function_coverage=1 00:32:26.677 --rc genhtml_legend=1 00:32:26.677 --rc geninfo_all_blocks=1 00:32:26.677 --rc geninfo_unexecuted_blocks=1 00:32:26.677 00:32:26.677 ' 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:26.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.677 --rc genhtml_branch_coverage=1 00:32:26.677 --rc genhtml_function_coverage=1 00:32:26.677 --rc genhtml_legend=1 00:32:26.677 --rc geninfo_all_blocks=1 00:32:26.677 --rc geninfo_unexecuted_blocks=1 00:32:26.677 00:32:26.677 ' 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:26.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.677 --rc genhtml_branch_coverage=1 00:32:26.677 --rc genhtml_function_coverage=1 00:32:26.677 --rc genhtml_legend=1 00:32:26.677 --rc geninfo_all_blocks=1 00:32:26.677 --rc geninfo_unexecuted_blocks=1 00:32:26.677 00:32:26.677 ' 00:32:26.677 23:14:05 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:32:26.677 23:14:05 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:32:26.677 23:14:05 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:32:26.677 23:14:05 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:32:26.677 23:14:05 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:32:26.677 23:14:05 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:32:26.677 23:14:05 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:32:26.677 23:14:05 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:32:26.677 23:14:05 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:32:26.677 23:14:05 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:32:26.677 23:14:05 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74331 00:32:26.677 23:14:05 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:26.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.677 23:14:05 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74331 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74331 ']' 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.677 23:14:05 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:32:26.677 23:14:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.677 [2024-12-09 23:14:05.124659] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:32:26.677 [2024-12-09 23:14:05.124785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74331 ] 00:32:26.934 [2024-12-09 23:14:05.283810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:26.934 [2024-12-09 23:14:05.387821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.934 [2024-12-09 23:14:05.387931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.866 23:14:05 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.866 23:14:05 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:32:27.866 23:14:05 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:32:27.866 23:14:05 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.866 23:14:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.866 [2024-12-09 23:14:05.985238] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:32:27.866 [2024-12-09 23:14:05.987107] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:32:27.866 23:14:05 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.866 23:14:05 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:32:27.866 23:14:05 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.866 23:14:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.866 malloc0 00:32:27.866 23:14:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.866 23:14:06 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:32:27.866 23:14:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.866 23:14:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.866 [2024-12-09 23:14:06.089368] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:32:27.866 [2024-12-09 23:14:06.089469] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:32:27.866 [2024-12-09 23:14:06.089480] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:32:27.866 [2024-12-09 23:14:06.089487] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:32:27.866 [2024-12-09 23:14:06.097387] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:32:27.866 [2024-12-09 23:14:06.097410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:32:27.866 [2024-12-09 23:14:06.105247] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:32:27.866 [2024-12-09 23:14:06.105383] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:32:27.866 [2024-12-09 23:14:06.122254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:32:27.866 1 00:32:27.866 23:14:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.866 23:14:06 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:32:28.797 23:14:07 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74366 00:32:28.797 23:14:07 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:32:28.797 23:14:07 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:32:28.797 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:28.797 fio-3.35 00:32:28.797 Starting 1 process 00:32:34.056 23:14:12 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74331 00:32:34.056 23:14:12 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:32:39.326 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74331 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:32:39.326 23:14:17 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74481 00:32:39.326 23:14:17 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:32:39.326 23:14:17 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:39.326 23:14:17 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74481 00:32:39.326 23:14:17 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74481 ']' 00:32:39.326 23:14:17 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.326 23:14:17 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.326 23:14:17 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.326 23:14:17 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.326 23:14:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.326 [2024-12-09 23:14:17.218509] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:32:39.326 [2024-12-09 23:14:17.219039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74481 ] 00:32:39.326 [2024-12-09 23:14:17.379750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:39.326 [2024-12-09 23:14:17.482247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.326 [2024-12-09 23:14:17.482255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:32:39.892 23:14:18 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.892 [2024-12-09 23:14:18.079244] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:32:39.892 [2024-12-09 23:14:18.081355] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.892 23:14:18 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.892 malloc0 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.892 23:14:18 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.892 [2024-12-09 23:14:18.184381] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:32:39.892 [2024-12-09 23:14:18.184423] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:32:39.892 [2024-12-09 23:14:18.184436] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:32:39.892 [2024-12-09 23:14:18.192262] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:32:39.892 [2024-12-09 23:14:18.192291] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:32:39.892 1 00:32:39.892 23:14:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.892 23:14:18 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74366 00:32:40.861 [2024-12-09 23:14:19.192333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:32:40.861 [2024-12-09 23:14:19.198251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:32:40.861 [2024-12-09 23:14:19.198279] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:32:41.812 [2024-12-09 23:14:20.204258] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:32:41.812 [2024-12-09 23:14:20.213252] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:32:41.812 [2024-12-09 23:14:20.213285] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:32:43.185 [2024-12-09 23:14:21.213321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:32:43.185 [2024-12-09 23:14:21.219249] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:32:43.185 [2024-12-09 23:14:21.219275] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:32:43.185 [2024-12-09 23:14:21.219289] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:32:43.185 [2024-12-09 23:14:21.219403] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:33:05.233 [2024-12-09 23:14:42.663249] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:33:05.233 [2024-12-09 23:14:42.669738] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:33:05.233 [2024-12-09 23:14:42.677479] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:33:05.233 [2024-12-09 23:14:42.677498] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:33:31.846 00:33:31.846 fio_test: (groupid=0, jobs=1): err= 0: pid=74369: Mon Dec 9 23:15:07 2024 00:33:31.846 read: IOPS=14.4k, BW=56.4MiB/s (59.1MB/s)(3383MiB/60001msec) 00:33:31.846 slat (nsec): min=976, max=295988, avg=5014.06, stdev=1855.15 00:33:31.846 clat (usec): min=939, max=30551k, avg=4606.44, stdev=274609.43 00:33:31.846 lat (usec): min=943, max=30551k, avg=4611.46, stdev=274609.43 00:33:31.846 clat percentiles (usec): 00:33:31.846 | 1.00th=[ 1680], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1909], 00:33:31.846 | 30.00th=[ 1942], 40.00th=[ 1958], 50.00th=[ 1975], 60.00th=[ 1991], 00:33:31.846 | 70.00th=[ 2024], 80.00th=[ 2073], 90.00th=[ 2442], 95.00th=[ 3130], 00:33:31.846 | 99.00th=[ 5080], 99.50th=[ 5538], 99.90th=[ 7308], 99.95th=[ 8291], 00:33:31.846 | 99.99th=[13173] 00:33:31.846 bw ( KiB/s): min=21984, max=124992, per=100.00%, avg=115545.36, stdev=17595.07, samples=59 00:33:31.846 iops : min= 5496, max=31248, avg=28886.34, stdev=4398.77, samples=59 00:33:31.846 write: IOPS=14.4k, BW=56.3MiB/s (59.0MB/s)(3378MiB/60001msec); 0 zone resets 00:33:31.846 slat (nsec): min=1096, max=242006, avg=5041.83, stdev=1800.15 00:33:31.847 clat (usec): min=933, max=30551k, avg=4255.65, stdev=250154.65 00:33:31.847 lat (usec): min=938, max=30551k, avg=4260.69, stdev=250154.65 00:33:31.847 clat percentiles (usec): 00:33:31.847 | 1.00th=[ 1713], 5.00th=[ 1926], 10.00th=[ 1958], 20.00th=[ 2008], 00:33:31.847 | 30.00th=[ 2024], 40.00th=[ 2040], 50.00th=[ 2057], 60.00th=[ 2089], 00:33:31.847 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2507], 95.00th=[ 3032], 00:33:31.847 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 7373], 99.95th=[ 8291], 00:33:31.847 | 99.99th=[13304] 00:33:31.847 bw ( KiB/s): min=22792, max=125816, per=100.00%, avg=115400.14, stdev=17439.25, samples=59 00:33:31.847 iops : min= 5698, max=31454, avg=28850.03, stdev=4359.81, samples=59 00:33:31.847 lat (usec) : 1000=0.01% 00:33:31.847 lat (msec) : 2=40.33%, 4=56.89%, 10=2.74%, 20=0.03%, >=2000=0.01% 00:33:31.847 cpu : usr=3.39%, sys=14.99%, ctx=60600, majf=0, minf=13 00:33:31.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:33:31.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:31.847 issued 
rwts: total=866158,864863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:31.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:31.847 00:33:31.847 Run status group 0 (all jobs): 00:33:31.847 READ: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=3383MiB (3548MB), run=60001-60001msec 00:33:31.847 WRITE: bw=56.3MiB/s (59.0MB/s), 56.3MiB/s-56.3MiB/s (59.0MB/s-59.0MB/s), io=3378MiB (3542MB), run=60001-60001msec 00:33:31.847 00:33:31.847 Disk stats (read/write): 00:33:31.847 ublkb1: ios=862737/861587, merge=0/0, ticks=3927924/3550471, in_queue=7478395, util=99.91% 00:33:31.847 23:15:07 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.847 [2024-12-09 23:15:07.384099] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:33:31.847 [2024-12-09 23:15:07.423258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:33:31.847 [2024-12-09 23:15:07.423407] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:33:31.847 [2024-12-09 23:15:07.431242] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:33:31.847 [2024-12-09 23:15:07.431334] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:33:31.847 [2024-12-09 23:15:07.431341] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.847 23:15:07 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.847 [2024-12-09 23:15:07.447318] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:33:31.847 [2024-12-09 23:15:07.451091] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:33:31.847 [2024-12-09 23:15:07.451125] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.847 23:15:07 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:31.847 23:15:07 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:33:31.847 23:15:07 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74481 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74481 ']' 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74481 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74481 00:33:31.847 killing process with pid 74481 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74481' 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74481 00:33:31.847 23:15:07 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74481 
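In condensed form, the recovery path exercised above ran as follows while fio kept /dev/ublkb1 open: kill the original target, start a fresh one, recreate the backing bdev, and re-adopt the live ublk device. A sketch with the same arguments the test used (rpc.py stands for the repo's scripts/rpc.py; the pids shown are this run's, not fixed values):

    kill -9 74331                                                      # original spdk_tgt, killed mid-fio
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &   # restarts as pid 74481
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096                       # recreate the backing bdev
    rpc.py ublk_recover_disk malloc0 1                                 # re-attach the still-open /dev/ublkb1

As a consistency check on the fio summary: 866158 completed reads at the 4096-byte block size come to about 3383 MiB, which over the 60-second run gives the reported 56.4 MiB/s; the write side (864863 IOs, about 3378 MiB, 56.3 MiB/s) lines up the same way, and fio finished with err=0 despite the mid-run kill and recovery.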
00:33:31.847 [2024-12-09 23:15:08.526176] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:33:31.847 [2024-12-09 23:15:08.526240] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:33:31.847 00:33:31.847 real 1m4.354s 00:33:31.847 user 1m47.014s 00:33:31.847 sys 0m22.103s 00:33:31.847 23:15:09 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.847 ************************************ 00:33:31.847 END TEST ublk_recovery 00:33:31.847 ************************************ 00:33:31.847 23:15:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.847 23:15:09 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:33:31.847 23:15:09 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@260 -- # timing_exit lib 00:33:31.847 23:15:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.847 23:15:09 -- common/autotest_common.sh@10 -- # set +x 00:33:31.847 23:15:09 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:33:31.847 23:15:09 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:33:31.847 23:15:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:31.847 23:15:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.847 23:15:09 -- common/autotest_common.sh@10 -- # set +x 00:33:31.847 ************************************ 00:33:31.847 START TEST ftl 00:33:31.847 ************************************ 00:33:31.847 23:15:09 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:33:31.847 * Looking for test storage... 
00:33:31.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:31.847 23:15:09 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:31.847 23:15:09 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:33:31.847 23:15:09 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:31.847 23:15:09 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:31.847 23:15:09 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.847 23:15:09 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.847 23:15:09 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.847 23:15:09 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.847 23:15:09 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.847 23:15:09 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.847 23:15:09 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.847 23:15:09 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.847 23:15:09 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.847 23:15:09 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.847 23:15:09 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.847 23:15:09 ftl -- scripts/common.sh@344 -- # case "$op" in 00:33:31.847 23:15:09 ftl -- scripts/common.sh@345 -- # : 1 00:33:31.847 23:15:09 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.847 23:15:09 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:31.847 23:15:09 ftl -- scripts/common.sh@365 -- # decimal 1 00:33:31.847 23:15:09 ftl -- scripts/common.sh@353 -- # local d=1 00:33:31.847 23:15:09 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.847 23:15:09 ftl -- scripts/common.sh@355 -- # echo 1 00:33:31.847 23:15:09 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.847 23:15:09 ftl -- scripts/common.sh@366 -- # decimal 2 00:33:31.847 23:15:09 ftl -- scripts/common.sh@353 -- # local d=2 00:33:31.847 23:15:09 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.847 23:15:09 ftl -- scripts/common.sh@355 -- # echo 2 00:33:31.847 23:15:09 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.847 23:15:09 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.847 23:15:09 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.847 23:15:09 ftl -- scripts/common.sh@368 -- # return 0 00:33:31.847 23:15:09 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.847 23:15:09 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:31.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.847 --rc genhtml_branch_coverage=1 00:33:31.847 --rc genhtml_function_coverage=1 00:33:31.847 --rc genhtml_legend=1 00:33:31.847 --rc geninfo_all_blocks=1 00:33:31.847 --rc geninfo_unexecuted_blocks=1 00:33:31.847 00:33:31.847 ' 00:33:31.847 23:15:09 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:31.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.847 --rc genhtml_branch_coverage=1 00:33:31.847 --rc genhtml_function_coverage=1 00:33:31.847 --rc genhtml_legend=1 00:33:31.847 --rc geninfo_all_blocks=1 00:33:31.847 --rc geninfo_unexecuted_blocks=1 00:33:31.847 00:33:31.847 ' 00:33:31.847 23:15:09 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:31.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.847 --rc genhtml_branch_coverage=1 00:33:31.847 --rc genhtml_function_coverage=1 00:33:31.847 --rc 
genhtml_legend=1 00:33:31.847 --rc geninfo_all_blocks=1 00:33:31.847 --rc geninfo_unexecuted_blocks=1 00:33:31.847 00:33:31.847 ' 00:33:31.847 23:15:09 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:31.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.847 --rc genhtml_branch_coverage=1 00:33:31.847 --rc genhtml_function_coverage=1 00:33:31.847 --rc genhtml_legend=1 00:33:31.847 --rc geninfo_all_blocks=1 00:33:31.847 --rc geninfo_unexecuted_blocks=1 00:33:31.847 00:33:31.847 ' 00:33:31.847 23:15:09 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:31.847 23:15:09 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:33:31.847 23:15:09 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:31.847 23:15:09 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:31.848 23:15:09 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:33:31.848 23:15:09 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:31.848 23:15:09 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:31.848 23:15:09 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:31.848 23:15:09 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:31.848 23:15:09 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:31.848 23:15:09 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:31.848 23:15:09 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:31.848 23:15:09 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:31.848 23:15:09 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:31.848 23:15:09 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:31.848 23:15:09 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:31.848 23:15:09 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:31.848 23:15:09 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:31.848 23:15:09 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:31.848 23:15:09 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:31.848 23:15:09 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:31.848 23:15:09 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:31.848 23:15:09 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:31.848 23:15:09 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:31.848 23:15:09 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:31.848 23:15:09 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:31.848 23:15:09 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:31.848 23:15:09 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:31.848 23:15:09 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:31.848 23:15:09 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:31.848 23:15:09 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:33:31.848 23:15:09 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:33:31.848 23:15:09 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:33:31.848 23:15:09 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:33:31.848 23:15:09 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:31.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:31.848 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:33:31.848 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:33:31.848 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:33:31.848 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:33:31.848 23:15:09 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:33:31.848 23:15:09 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75282 00:33:31.848 23:15:09 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75282 00:33:31.848 23:15:09 ftl -- common/autotest_common.sh@835 -- # '[' -z 75282 ']' 00:33:31.848 23:15:09 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.848 23:15:09 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.848 23:15:09 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.848 23:15:09 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.848 23:15:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 [2024-12-09 23:15:10.009329] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:33:31.848 [2024-12-09 23:15:10.009649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75282 ] 00:33:31.848 [2024-12-09 23:15:10.168453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.848 [2024-12-09 23:15:10.272230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.419 23:15:10 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.419 23:15:10 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:32.419 23:15:10 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:33:32.743 23:15:11 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:33:33.687 23:15:11 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:33:33.687 23:15:11 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:33.949 23:15:12 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:33:33.949 23:15:12 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:33:33.949 23:15:12 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:33:34.211 23:15:12 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:33:34.211 23:15:12 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:33:34.211 23:15:12 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:33:34.211 23:15:12 ftl -- ftl/ftl.sh@50 -- # break 00:33:34.211 23:15:12 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:33:34.211 23:15:12 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:33:34.211 23:15:12 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:33:34.211 23:15:12 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:33:34.477 23:15:12 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:33:34.477 23:15:12 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:33:34.477 23:15:12 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:33:34.477 23:15:12 ftl -- ftl/ftl.sh@63 -- # break 00:33:34.477 23:15:12 ftl -- ftl/ftl.sh@66 -- # killprocess 75282 00:33:34.477 23:15:12 ftl -- common/autotest_common.sh@954 -- # '[' -z 75282 ']' 00:33:34.477 23:15:12 ftl -- common/autotest_common.sh@958 -- # kill -0 75282 00:33:34.477 23:15:12 ftl -- common/autotest_common.sh@959 -- # uname 00:33:34.477 23:15:12 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:34.477 23:15:12 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75282 00:33:34.477 killing process with pid 75282 00:33:34.477 23:15:12 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:34.477 23:15:12 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:34.477 23:15:12 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75282' 00:33:34.477 23:15:12 ftl -- common/autotest_common.sh@973 -- # kill 75282 00:33:34.477 23:15:12 ftl -- common/autotest_common.sh@978 -- # wait 75282 00:33:36.402 23:15:14 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:33:36.402 23:15:14 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:33:36.402 23:15:14 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:36.402 23:15:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:36.402 23:15:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:36.402 ************************************ 00:33:36.402 START TEST ftl_fio_basic 00:33:36.402 ************************************ 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:33:36.402 * Looking for test storage... 
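Before handing off to fio.sh, ftl.sh picked 0000:00:10.0 as the NV cache and 0000:00:11.0 as the base device, shut down the probe target (pid 75282), and re-invoked itself as run_test ftl_fio_basic fio.sh <base> <cache> basic. The selection step reduces to two bdev_get_bdevs queries filtered with jq: a cache candidate must expose 64-byte metadata (md_size==64), be non-zoned, and have at least 1310720 blocks, while any other sufficiently large non-zoned device becomes a base candidate. A minimal sketch of that logic, with the jq filters copied from the trace and the loop shape only approximated from test/ftl/ftl.sh:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # NV-cache candidates: non-zoned bdevs exposing 64-byte metadata and
  # at least 1310720 blocks (jq filter copied from the trace above)
  cache_disks=$($rpc_py bdev_get_bdevs | jq -r '.[]
      | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address')
  for disk in $cache_disks; do
      nv_cache=$disk
      break   # first suitable device wins
  done
  [ -z "$nv_cache" ] && exit 1   # mirrors the "'[' -z 0000:00:10.0 ']'" guard in the trace
  # Base candidates: any other large, non-zoned NVMe device
  base_disks=$($rpc_py bdev_get_bdevs | jq -r ".[]
      | select(.driver_specific.nvme[0].pci_address != \"$nv_cache\"
               and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address")
  for disk in $base_disks; do
      device=$disk
      break
  done

On this QEMU VM both filters match the emulated 1b36:0010 controllers bound earlier by setup.sh, which is why the cache and base roles land on 10.0 and 11.0 respectively.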
00:33:36.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:33:36.402 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:36.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.403 --rc genhtml_branch_coverage=1 00:33:36.403 --rc genhtml_function_coverage=1 00:33:36.403 --rc genhtml_legend=1 00:33:36.403 --rc geninfo_all_blocks=1 00:33:36.403 --rc geninfo_unexecuted_blocks=1 00:33:36.403 00:33:36.403 ' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:36.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.403 --rc 
genhtml_branch_coverage=1 00:33:36.403 --rc genhtml_function_coverage=1 00:33:36.403 --rc genhtml_legend=1 00:33:36.403 --rc geninfo_all_blocks=1 00:33:36.403 --rc geninfo_unexecuted_blocks=1 00:33:36.403 00:33:36.403 ' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:36.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.403 --rc genhtml_branch_coverage=1 00:33:36.403 --rc genhtml_function_coverage=1 00:33:36.403 --rc genhtml_legend=1 00:33:36.403 --rc geninfo_all_blocks=1 00:33:36.403 --rc geninfo_unexecuted_blocks=1 00:33:36.403 00:33:36.403 ' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:36.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.403 --rc genhtml_branch_coverage=1 00:33:36.403 --rc genhtml_function_coverage=1 00:33:36.403 --rc genhtml_legend=1 00:33:36.403 --rc geninfo_all_blocks=1 00:33:36.403 --rc geninfo_unexecuted_blocks=1 00:33:36.403 00:33:36.403 ' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:36.403 
23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75417 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75417 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75417 ']' 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
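Up to this point fio.sh has only read its arguments and declared the test matrix: an associative array maps suite names to fio job lists, the positional parameters supply the base and cache PCI addresses, and a fresh three-core spdk_tgt is launched for the run. A condensed sketch of that prologue, assuming the positional-argument order shown in the run_test invocation (base bdf, cache bdf, suite name); the values and exports are taken verbatim from the trace, and waitforlisten is the autotest_common.sh helper seen waiting on pid 75417:

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
  suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

  device=$1            # 0000:00:11.0, the FTL base device
  cache_device=$2      # 0000:00:10.0, the NV-cache device
  tests=${suite[$3]}   # 'basic' selects the three randw-verify jobs
  timeout=240
  [ -z "$tests" ] && exit 1

  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT

  # -m 7 is core mask 0x7: reactors on cores 0, 1 and 2, matching the three
  # "Reactor started" notices that follow in the log
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
  svcpid=$!
  waitforlisten $svcpid   # blocks until /var/tmp/spdk.sock answers RPCs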
00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.403 23:15:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:33:36.403 [2024-12-09 23:15:14.788872] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:33:36.403 [2024-12-09 23:15:14.789204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75417 ] 00:33:36.664 [2024-12-09 23:15:14.954100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:36.664 [2024-12-09 23:15:15.101739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.664 [2024-12-09 23:15:15.102151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.664 [2024-12-09 23:15:15.102175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:37.607 23:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:37.607 23:15:15 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:33:37.607 23:15:15 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:33:37.607 23:15:15 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:33:37.607 23:15:15 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:37.607 23:15:15 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:33:37.607 23:15:15 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:33:37.607 23:15:15 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:37.868 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:33:37.868 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:33:37.868 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:33:37.868 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:33:37.868 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:37.868 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:33:37.868 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:33:37.868 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:33:38.203 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:38.203 { 00:33:38.203 "name": "nvme0n1", 00:33:38.203 "aliases": [ 00:33:38.203 "8cf6abc1-1d73-47fe-a294-8d1f6d396fe7" 00:33:38.203 ], 00:33:38.203 "product_name": "NVMe disk", 00:33:38.203 "block_size": 4096, 00:33:38.203 "num_blocks": 1310720, 00:33:38.203 "uuid": "8cf6abc1-1d73-47fe-a294-8d1f6d396fe7", 00:33:38.203 "numa_id": -1, 00:33:38.203 "assigned_rate_limits": { 00:33:38.203 "rw_ios_per_sec": 0, 00:33:38.203 "rw_mbytes_per_sec": 0, 00:33:38.203 "r_mbytes_per_sec": 0, 00:33:38.203 "w_mbytes_per_sec": 0 00:33:38.203 }, 00:33:38.203 "claimed": false, 00:33:38.203 "zoned": false, 00:33:38.203 "supported_io_types": { 00:33:38.203 "read": true, 00:33:38.204 "write": true, 00:33:38.204 "unmap": true, 00:33:38.204 "flush": true, 00:33:38.204 "reset": true, 00:33:38.204 "nvme_admin": true, 00:33:38.204 "nvme_io": true, 00:33:38.204 "nvme_io_md": 
false, 00:33:38.204 "write_zeroes": true, 00:33:38.204 "zcopy": false, 00:33:38.204 "get_zone_info": false, 00:33:38.204 "zone_management": false, 00:33:38.204 "zone_append": false, 00:33:38.204 "compare": true, 00:33:38.204 "compare_and_write": false, 00:33:38.204 "abort": true, 00:33:38.204 "seek_hole": false, 00:33:38.204 "seek_data": false, 00:33:38.204 "copy": true, 00:33:38.204 "nvme_iov_md": false 00:33:38.204 }, 00:33:38.204 "driver_specific": { 00:33:38.204 "nvme": [ 00:33:38.204 { 00:33:38.204 "pci_address": "0000:00:11.0", 00:33:38.204 "trid": { 00:33:38.204 "trtype": "PCIe", 00:33:38.204 "traddr": "0000:00:11.0" 00:33:38.204 }, 00:33:38.204 "ctrlr_data": { 00:33:38.204 "cntlid": 0, 00:33:38.204 "vendor_id": "0x1b36", 00:33:38.204 "model_number": "QEMU NVMe Ctrl", 00:33:38.204 "serial_number": "12341", 00:33:38.204 "firmware_revision": "8.0.0", 00:33:38.204 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:38.204 "oacs": { 00:33:38.204 "security": 0, 00:33:38.204 "format": 1, 00:33:38.204 "firmware": 0, 00:33:38.204 "ns_manage": 1 00:33:38.204 }, 00:33:38.204 "multi_ctrlr": false, 00:33:38.204 "ana_reporting": false 00:33:38.204 }, 00:33:38.204 "vs": { 00:33:38.204 "nvme_version": "1.4" 00:33:38.204 }, 00:33:38.204 "ns_data": { 00:33:38.204 "id": 1, 00:33:38.204 "can_share": false 00:33:38.204 } 00:33:38.204 } 00:33:38.204 ], 00:33:38.204 "mp_policy": "active_passive" 00:33:38.204 } 00:33:38.204 } 00:33:38.204 ]' 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:38.204 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:38.464 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:33:38.464 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:33:38.464 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=d32983aa-0458-45a7-bd81-ea99be1ee5fa 00:33:38.464 23:15:16 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d32983aa-0458-45a7-bd81-ea99be1ee5fa 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:38.725 23:15:17 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:33:38.725 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:38.987 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:38.987 { 00:33:38.987 "name": "b65a19cf-4e32-4389-8920-56f2a4ed6f7f", 00:33:38.987 "aliases": [ 00:33:38.987 "lvs/nvme0n1p0" 00:33:38.987 ], 00:33:38.987 "product_name": "Logical Volume", 00:33:38.987 "block_size": 4096, 00:33:38.987 "num_blocks": 26476544, 00:33:38.987 "uuid": "b65a19cf-4e32-4389-8920-56f2a4ed6f7f", 00:33:38.987 "assigned_rate_limits": { 00:33:38.987 "rw_ios_per_sec": 0, 00:33:38.987 "rw_mbytes_per_sec": 0, 00:33:38.987 "r_mbytes_per_sec": 0, 00:33:38.987 "w_mbytes_per_sec": 0 00:33:38.987 }, 00:33:38.987 "claimed": false, 00:33:38.987 "zoned": false, 00:33:38.987 "supported_io_types": { 00:33:38.987 "read": true, 00:33:38.987 "write": true, 00:33:38.987 "unmap": true, 00:33:38.987 "flush": false, 00:33:38.987 "reset": true, 00:33:38.987 "nvme_admin": false, 00:33:38.987 "nvme_io": false, 00:33:38.987 "nvme_io_md": false, 00:33:38.987 "write_zeroes": true, 00:33:38.987 "zcopy": false, 00:33:38.987 "get_zone_info": false, 00:33:38.987 "zone_management": false, 00:33:38.987 "zone_append": false, 00:33:38.987 "compare": false, 00:33:38.987 "compare_and_write": false, 00:33:38.987 "abort": false, 00:33:38.987 "seek_hole": true, 00:33:38.987 "seek_data": true, 00:33:38.987 "copy": false, 00:33:38.987 "nvme_iov_md": false 00:33:38.987 }, 00:33:38.987 "driver_specific": { 00:33:38.987 "lvol": { 00:33:38.987 "lvol_store_uuid": "d32983aa-0458-45a7-bd81-ea99be1ee5fa", 00:33:38.987 "base_bdev": "nvme0n1", 00:33:38.987 "thin_provision": true, 00:33:38.987 "num_allocated_clusters": 0, 00:33:38.987 "snapshot": false, 00:33:38.987 "clone": false, 00:33:38.987 "esnap_clone": false 00:33:38.987 } 00:33:38.987 } 00:33:38.987 } 00:33:38.987 ]' 00:33:38.987 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:38.987 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:33:38.987 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:39.248 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:39.248 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:39.248 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:33:39.248 23:15:17 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:33:39.248 23:15:17 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:33:39.248 23:15:17 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:33:39.509 23:15:17 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:39.509 23:15:17 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:33:39.509 23:15:17 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:39.509 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:39.509 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:39.509 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:33:39.509 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:33:39.509 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:39.509 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:39.509 { 00:33:39.509 "name": "b65a19cf-4e32-4389-8920-56f2a4ed6f7f", 00:33:39.509 "aliases": [ 00:33:39.509 "lvs/nvme0n1p0" 00:33:39.509 ], 00:33:39.509 "product_name": "Logical Volume", 00:33:39.509 "block_size": 4096, 00:33:39.509 "num_blocks": 26476544, 00:33:39.509 "uuid": "b65a19cf-4e32-4389-8920-56f2a4ed6f7f", 00:33:39.509 "assigned_rate_limits": { 00:33:39.509 "rw_ios_per_sec": 0, 00:33:39.509 "rw_mbytes_per_sec": 0, 00:33:39.509 "r_mbytes_per_sec": 0, 00:33:39.509 "w_mbytes_per_sec": 0 00:33:39.509 }, 00:33:39.509 "claimed": false, 00:33:39.509 "zoned": false, 00:33:39.509 "supported_io_types": { 00:33:39.509 "read": true, 00:33:39.509 "write": true, 00:33:39.509 "unmap": true, 00:33:39.509 "flush": false, 00:33:39.509 "reset": true, 00:33:39.509 "nvme_admin": false, 00:33:39.509 "nvme_io": false, 00:33:39.509 "nvme_io_md": false, 00:33:39.509 "write_zeroes": true, 00:33:39.509 "zcopy": false, 00:33:39.509 "get_zone_info": false, 00:33:39.509 "zone_management": false, 00:33:39.509 "zone_append": false, 00:33:39.509 "compare": false, 00:33:39.509 "compare_and_write": false, 00:33:39.509 "abort": false, 00:33:39.509 "seek_hole": true, 00:33:39.509 "seek_data": true, 00:33:39.509 "copy": false, 00:33:39.509 "nvme_iov_md": false 00:33:39.509 }, 00:33:39.509 "driver_specific": { 00:33:39.509 "lvol": { 00:33:39.509 "lvol_store_uuid": "d32983aa-0458-45a7-bd81-ea99be1ee5fa", 00:33:39.509 "base_bdev": "nvme0n1", 00:33:39.509 "thin_provision": true, 00:33:39.509 "num_allocated_clusters": 0, 00:33:39.509 "snapshot": false, 00:33:39.509 "clone": false, 00:33:39.509 "esnap_clone": false 00:33:39.509 } 00:33:39.509 } 00:33:39.509 } 00:33:39.509 ]' 00:33:39.509 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:39.771 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:33:39.771 23:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:39.771 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:39.771 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:39.771 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:33:39.771 23:15:18 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:33:39.771 23:15:18 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:40.032 23:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:33:40.032 23:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:33:40.033 23:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:33:40.033 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:33:40.033 23:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:40.033 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:40.033 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:40.033 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:33:40.033 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:33:40.033 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b65a19cf-4e32-4389-8920-56f2a4ed6f7f 00:33:40.033 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:40.033 { 00:33:40.033 "name": "b65a19cf-4e32-4389-8920-56f2a4ed6f7f", 00:33:40.033 "aliases": [ 00:33:40.033 "lvs/nvme0n1p0" 00:33:40.033 ], 00:33:40.033 "product_name": "Logical Volume", 00:33:40.033 "block_size": 4096, 00:33:40.033 "num_blocks": 26476544, 00:33:40.033 "uuid": "b65a19cf-4e32-4389-8920-56f2a4ed6f7f", 00:33:40.033 "assigned_rate_limits": { 00:33:40.033 "rw_ios_per_sec": 0, 00:33:40.033 "rw_mbytes_per_sec": 0, 00:33:40.033 "r_mbytes_per_sec": 0, 00:33:40.033 "w_mbytes_per_sec": 0 00:33:40.033 }, 00:33:40.033 "claimed": false, 00:33:40.033 "zoned": false, 00:33:40.033 "supported_io_types": { 00:33:40.033 "read": true, 00:33:40.033 "write": true, 00:33:40.033 "unmap": true, 00:33:40.033 "flush": false, 00:33:40.033 "reset": true, 00:33:40.033 "nvme_admin": false, 00:33:40.033 "nvme_io": false, 00:33:40.033 "nvme_io_md": false, 00:33:40.033 "write_zeroes": true, 00:33:40.033 "zcopy": false, 00:33:40.033 "get_zone_info": false, 00:33:40.033 "zone_management": false, 00:33:40.033 "zone_append": false, 00:33:40.033 "compare": false, 00:33:40.033 "compare_and_write": false, 00:33:40.033 "abort": false, 00:33:40.033 "seek_hole": true, 00:33:40.033 "seek_data": true, 00:33:40.033 "copy": false, 00:33:40.033 "nvme_iov_md": false 00:33:40.033 }, 00:33:40.033 "driver_specific": { 00:33:40.033 "lvol": { 00:33:40.033 "lvol_store_uuid": "d32983aa-0458-45a7-bd81-ea99be1ee5fa", 00:33:40.033 "base_bdev": "nvme0n1", 00:33:40.033 "thin_provision": true, 00:33:40.033 "num_allocated_clusters": 0, 00:33:40.033 "snapshot": false, 00:33:40.033 "clone": false, 00:33:40.033 "esnap_clone": false 00:33:40.033 } 00:33:40.033 } 00:33:40.033 } 00:33:40.033 ]' 00:33:40.033 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:40.294 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:33:40.294 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:40.294 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:40.294 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:40.294 23:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:33:40.294 23:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:33:40.294 23:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:33:40.294 23:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b65a19cf-4e32-4389-8920-56f2a4ed6f7f -c nvc0n1p0 --l2p_dram_limit 60 00:33:40.556 [2024-12-09 23:15:18.771058] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.771147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:40.556 [2024-12-09 23:15:18.771169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:40.556 [2024-12-09 23:15:18.771179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.556 [2024-12-09 23:15:18.771283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.771298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:40.556 [2024-12-09 23:15:18.771316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:33:40.556 [2024-12-09 23:15:18.771325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.556 [2024-12-09 23:15:18.771371] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:40.556 [2024-12-09 23:15:18.772199] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:40.556 [2024-12-09 23:15:18.772245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.772256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:40.556 [2024-12-09 23:15:18.772271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:33:40.556 [2024-12-09 23:15:18.772279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.556 [2024-12-09 23:15:18.772335] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7b19f02f-bc22-48c6-b2e3-e16c6457781a 00:33:40.556 [2024-12-09 23:15:18.774428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.774483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:40.556 [2024-12-09 23:15:18.774494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:40.556 [2024-12-09 23:15:18.774504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.556 [2024-12-09 23:15:18.784722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.784785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:40.556 [2024-12-09 23:15:18.784798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.125 ms 00:33:40.556 [2024-12-09 23:15:18.784815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.556 [2024-12-09 23:15:18.785011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.785026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:40.556 [2024-12-09 23:15:18.785036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:33:40.556 [2024-12-09 23:15:18.785052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.556 [2024-12-09 23:15:18.785162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.785176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:40.556 [2024-12-09 23:15:18.785185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:40.556 [2024-12-09 23:15:18.785196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:33:40.556 [2024-12-09 23:15:18.785261] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:40.556 [2024-12-09 23:15:18.789855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.789910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:40.556 [2024-12-09 23:15:18.789931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.599 ms 00:33:40.556 [2024-12-09 23:15:18.789939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.556 [2024-12-09 23:15:18.789999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.790009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:40.556 [2024-12-09 23:15:18.790021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:40.556 [2024-12-09 23:15:18.790031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.556 [2024-12-09 23:15:18.790083] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:40.556 [2024-12-09 23:15:18.790278] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:40.556 [2024-12-09 23:15:18.790298] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:40.556 [2024-12-09 23:15:18.790310] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:40.556 [2024-12-09 23:15:18.790324] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:40.556 [2024-12-09 23:15:18.790334] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:40.556 [2024-12-09 23:15:18.790346] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:40.556 [2024-12-09 23:15:18.790354] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:40.556 [2024-12-09 23:15:18.790364] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:40.556 [2024-12-09 23:15:18.790372] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:40.556 [2024-12-09 23:15:18.790385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.790393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:40.556 [2024-12-09 23:15:18.790403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:33:40.556 [2024-12-09 23:15:18.790411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.556 [2024-12-09 23:15:18.790511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.556 [2024-12-09 23:15:18.790522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:40.556 [2024-12-09 23:15:18.790531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:33:40.556 [2024-12-09 23:15:18.790539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.556 [2024-12-09 23:15:18.790664] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:40.556 [2024-12-09 23:15:18.790676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:40.556 
[2024-12-09 23:15:18.790687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:40.556 [2024-12-09 23:15:18.790695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.556 [2024-12-09 23:15:18.790705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:40.556 [2024-12-09 23:15:18.790713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:40.556 [2024-12-09 23:15:18.790722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:40.556 [2024-12-09 23:15:18.790728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:40.556 [2024-12-09 23:15:18.790739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:40.556 [2024-12-09 23:15:18.790745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:40.556 [2024-12-09 23:15:18.790754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:40.556 [2024-12-09 23:15:18.790760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:40.556 [2024-12-09 23:15:18.790769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:40.556 [2024-12-09 23:15:18.790776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:40.556 [2024-12-09 23:15:18.790792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:40.556 [2024-12-09 23:15:18.790800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.556 [2024-12-09 23:15:18.790810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:40.556 [2024-12-09 23:15:18.790818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:40.556 [2024-12-09 23:15:18.790826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.556 [2024-12-09 23:15:18.790833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:40.556 [2024-12-09 23:15:18.790842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:40.556 [2024-12-09 23:15:18.790849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:40.556 [2024-12-09 23:15:18.790857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:40.556 [2024-12-09 23:15:18.790864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:40.556 [2024-12-09 23:15:18.790873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:40.556 [2024-12-09 23:15:18.790879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:40.556 [2024-12-09 23:15:18.790888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:40.557 [2024-12-09 23:15:18.790895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:40.557 [2024-12-09 23:15:18.790903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:40.557 [2024-12-09 23:15:18.790910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:40.557 [2024-12-09 23:15:18.790919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:40.557 [2024-12-09 23:15:18.790925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:40.557 [2024-12-09 23:15:18.790936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:40.557 [2024-12-09 23:15:18.790959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:33:40.557 [2024-12-09 23:15:18.790968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:40.557 [2024-12-09 23:15:18.790976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:40.557 [2024-12-09 23:15:18.790984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:40.557 [2024-12-09 23:15:18.790990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:40.557 [2024-12-09 23:15:18.790999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:40.557 [2024-12-09 23:15:18.791005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.557 [2024-12-09 23:15:18.791013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:40.557 [2024-12-09 23:15:18.791021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:40.557 [2024-12-09 23:15:18.791029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.557 [2024-12-09 23:15:18.791036] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:40.557 [2024-12-09 23:15:18.791046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:40.557 [2024-12-09 23:15:18.791053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:40.557 [2024-12-09 23:15:18.791066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.557 [2024-12-09 23:15:18.791074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:40.557 [2024-12-09 23:15:18.791086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:40.557 [2024-12-09 23:15:18.791093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:40.557 [2024-12-09 23:15:18.791102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:40.557 [2024-12-09 23:15:18.791109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:40.557 [2024-12-09 23:15:18.791118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:40.557 [2024-12-09 23:15:18.791126] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:40.557 [2024-12-09 23:15:18.791138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:40.557 [2024-12-09 23:15:18.791146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:40.557 [2024-12-09 23:15:18.791156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:40.557 [2024-12-09 23:15:18.791163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:40.557 [2024-12-09 23:15:18.791172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:40.557 [2024-12-09 23:15:18.791180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:40.557 [2024-12-09 23:15:18.791189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:40.557 [2024-12-09 
23:15:18.791196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:40.557 [2024-12-09 23:15:18.791206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:40.557 [2024-12-09 23:15:18.791213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:40.557 [2024-12-09 23:15:18.791238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:40.557 [2024-12-09 23:15:18.791246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:40.557 [2024-12-09 23:15:18.791255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:40.557 [2024-12-09 23:15:18.791262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:40.557 [2024-12-09 23:15:18.791272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:40.557 [2024-12-09 23:15:18.791280] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:40.557 [2024-12-09 23:15:18.791293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:40.557 [2024-12-09 23:15:18.791301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:40.557 [2024-12-09 23:15:18.791310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:40.557 [2024-12-09 23:15:18.791316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:40.557 [2024-12-09 23:15:18.791326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:40.557 [2024-12-09 23:15:18.791334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.557 [2024-12-09 23:15:18.791344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:40.557 [2024-12-09 23:15:18.791352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:33:40.557 [2024-12-09 23:15:18.791365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.557 [2024-12-09 23:15:18.791438] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
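One oddity worth flagging from earlier in this run: fio.sh line 52 printed "[: -eq: unary operator expected" because the guard was traced as '[' -eq 1 ']', i.e. an unquoted, empty variable left test with no operand before -eq. The command simply returns nonzero, so the branch is skipped and the run continues on the default path.

The bdev_ftl_create call that produced the layout dump above is the last step of a provisioning chain whose individual commands all appear verbatim earlier in the trace. Condensed into one place (the shell variable names here are illustrative, not from fio.sh):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base  -> nvme0n1
  $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache -> nvc0n1
  lvs=$($rpc_py bdev_lvol_create_lvstore nvme0n1 lvs)
  # 103424 MiB thin-provisioned (-t) volume to serve as the FTL base device
  base=$($rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")
  # one 5171 MiB split of the cache namespace -> nvc0n1p0
  $rpc_py bdev_split_create nvc0n1 -s 5171 1
  # assemble ftl0; -t 240 raises the RPC timeout so the first-boot NV-cache
  # scrub (the "Scrub NV cache" step just below, roughly 6 s for 5 chunks)
  # cannot trip it, and --l2p_dram_limit 60 caps the resident L2P at 60 MiB
  $rpc_py -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 60

The layout dump confirms the arithmetic: 103424.00 MiB base capacity, 5171.00 MiB NV cache, and 20971520 L2P entries at 4 bytes each (80 MiB of mappings), of which the 60 MiB DRAM limit keeps at most 59 MiB resident, matching the "l2p maximum resident size is: 59 (of 60) MiB" notice later in the startup.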
00:33:40.557 [2024-12-09 23:15:18.791452] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:33:47.162 [2024-12-09 23:15:24.841136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.162 [2024-12-09 23:15:24.841268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:33:47.162 [2024-12-09 23:15:24.841289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6049.661 ms 00:33:47.162 [2024-12-09 23:15:24.841300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.162 [2024-12-09 23:15:24.874725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.162 [2024-12-09 23:15:24.874839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:47.162 [2024-12-09 23:15:24.874864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.163 ms 00:33:47.162 [2024-12-09 23:15:24.874882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.162 [2024-12-09 23:15:24.875120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.162 [2024-12-09 23:15:24.875145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:47.162 [2024-12-09 23:15:24.875156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:33:47.162 [2024-12-09 23:15:24.875169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.162 [2024-12-09 23:15:24.925152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:24.925247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:47.163 [2024-12-09 23:15:24.925262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.879 ms 00:33:47.163 [2024-12-09 23:15:24.925275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:24.925340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:24.925352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:47.163 [2024-12-09 23:15:24.925361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:47.163 [2024-12-09 23:15:24.925371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:24.925979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:24.926009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:47.163 [2024-12-09 23:15:24.926024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:33:47.163 [2024-12-09 23:15:24.926034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:24.926214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:24.926249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:47.163 [2024-12-09 23:15:24.926259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:33:47.163 [2024-12-09 23:15:24.926272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:24.944387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:24.944440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:47.163 [2024-12-09 
23:15:24.944452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.080 ms 00:33:47.163 [2024-12-09 23:15:24.944462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:24.958082] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:33:47.163 [2024-12-09 23:15:24.978930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:24.978985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:47.163 [2024-12-09 23:15:24.979004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.322 ms 00:33:47.163 [2024-12-09 23:15:24.979013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.073016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.073108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:33:47.163 [2024-12-09 23:15:25.073127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.944 ms 00:33:47.163 [2024-12-09 23:15:25.073136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.073386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.073399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:47.163 [2024-12-09 23:15:25.073415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:33:47.163 [2024-12-09 23:15:25.073424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.099752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.099820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:33:47.163 [2024-12-09 23:15:25.099837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.251 ms 00:33:47.163 [2024-12-09 23:15:25.099845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.125298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.125355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:33:47.163 [2024-12-09 23:15:25.125371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.389 ms 00:33:47.163 [2024-12-09 23:15:25.125380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.126017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.126036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:47.163 [2024-12-09 23:15:25.126050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:33:47.163 [2024-12-09 23:15:25.126058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.216377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.216441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:33:47.163 [2024-12-09 23:15:25.216466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.256 ms 00:33:47.163 [2024-12-09 23:15:25.216475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 
23:15:25.244367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.244433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:33:47.163 [2024-12-09 23:15:25.244451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.780 ms 00:33:47.163 [2024-12-09 23:15:25.244460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.271574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.271642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:33:47.163 [2024-12-09 23:15:25.271659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.048 ms 00:33:47.163 [2024-12-09 23:15:25.271667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.297825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.297882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:47.163 [2024-12-09 23:15:25.297900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.092 ms 00:33:47.163 [2024-12-09 23:15:25.297907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.297970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.297981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:47.163 [2024-12-09 23:15:25.297999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:47.163 [2024-12-09 23:15:25.298007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.298110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.163 [2024-12-09 23:15:25.298122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:47.163 [2024-12-09 23:15:25.298133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:47.163 [2024-12-09 23:15:25.298141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.163 [2024-12-09 23:15:25.299464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6527.867 ms, result 0 00:33:47.163 { 00:33:47.163 "name": "ftl0", 00:33:47.163 "uuid": "7b19f02f-bc22-48c6-b2e3-e16c6457781a" 00:33:47.163 } 00:33:47.163 23:15:25 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:33:47.163 23:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:33:47.163 23:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:47.163 23:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:33:47.163 23:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:47.163 23:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:47.163 23:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:47.163 23:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:33:47.425 [ 00:33:47.425 { 00:33:47.425 "name": "ftl0", 00:33:47.425 "aliases": [ 00:33:47.425 "7b19f02f-bc22-48c6-b2e3-e16c6457781a" 00:33:47.425 ], 00:33:47.425 "product_name": "FTL 
disk", 00:33:47.425 "block_size": 4096, 00:33:47.425 "num_blocks": 20971520, 00:33:47.425 "uuid": "7b19f02f-bc22-48c6-b2e3-e16c6457781a", 00:33:47.425 "assigned_rate_limits": { 00:33:47.425 "rw_ios_per_sec": 0, 00:33:47.425 "rw_mbytes_per_sec": 0, 00:33:47.425 "r_mbytes_per_sec": 0, 00:33:47.425 "w_mbytes_per_sec": 0 00:33:47.425 }, 00:33:47.425 "claimed": false, 00:33:47.425 "zoned": false, 00:33:47.425 "supported_io_types": { 00:33:47.425 "read": true, 00:33:47.425 "write": true, 00:33:47.425 "unmap": true, 00:33:47.425 "flush": true, 00:33:47.425 "reset": false, 00:33:47.425 "nvme_admin": false, 00:33:47.425 "nvme_io": false, 00:33:47.425 "nvme_io_md": false, 00:33:47.425 "write_zeroes": true, 00:33:47.425 "zcopy": false, 00:33:47.425 "get_zone_info": false, 00:33:47.425 "zone_management": false, 00:33:47.425 "zone_append": false, 00:33:47.425 "compare": false, 00:33:47.425 "compare_and_write": false, 00:33:47.425 "abort": false, 00:33:47.425 "seek_hole": false, 00:33:47.425 "seek_data": false, 00:33:47.425 "copy": false, 00:33:47.425 "nvme_iov_md": false 00:33:47.425 }, 00:33:47.425 "driver_specific": { 00:33:47.425 "ftl": { 00:33:47.425 "base_bdev": "b65a19cf-4e32-4389-8920-56f2a4ed6f7f", 00:33:47.425 "cache": "nvc0n1p0" 00:33:47.425 } 00:33:47.425 } 00:33:47.425 } 00:33:47.425 ] 00:33:47.425 23:15:25 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:33:47.425 23:15:25 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:33:47.425 23:15:25 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:33:47.686 23:15:26 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:33:47.686 23:15:26 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:33:47.948 [2024-12-09 23:15:26.236051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.236097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:47.949 [2024-12-09 23:15:26.236110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:47.949 [2024-12-09 23:15:26.236122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.236154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:47.949 [2024-12-09 23:15:26.238719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.238750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:47.949 [2024-12-09 23:15:26.238763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.547 ms 00:33:47.949 [2024-12-09 23:15:26.238772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.239155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.239164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:47.949 [2024-12-09 23:15:26.239175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:33:47.949 [2024-12-09 23:15:26.239182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.242436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.242454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:47.949 
[2024-12-09 23:15:26.242465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:33:47.949 [2024-12-09 23:15:26.242474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.248547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.248568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:47.949 [2024-12-09 23:15:26.248578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.048 ms 00:33:47.949 [2024-12-09 23:15:26.248586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.271535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.271572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:47.949 [2024-12-09 23:15:26.271597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.868 ms 00:33:47.949 [2024-12-09 23:15:26.271605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.286026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.286057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:47.949 [2024-12-09 23:15:26.286074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.374 ms 00:33:47.949 [2024-12-09 23:15:26.286083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.286271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.286283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:47.949 [2024-12-09 23:15:26.286294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:33:47.949 [2024-12-09 23:15:26.286301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.309379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.309412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:47.949 [2024-12-09 23:15:26.309425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.052 ms 00:33:47.949 [2024-12-09 23:15:26.309434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.331727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.331759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:47.949 [2024-12-09 23:15:26.331773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.250 ms 00:33:47.949 [2024-12-09 23:15:26.331781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.353900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.353926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:47.949 [2024-12-09 23:15:26.353938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.074 ms 00:33:47.949 [2024-12-09 23:15:26.353945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.375840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.949 [2024-12-09 23:15:26.375867] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:47.949 [2024-12-09 23:15:26.375878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.796 ms 00:33:47.949 [2024-12-09 23:15:26.375886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.949 [2024-12-09 23:15:26.375923] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:47.949 [2024-12-09 23:15:26.375937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.375949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.375957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.375967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.375975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.375984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.375991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 
[2024-12-09 23:15:26.376125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:47.949 [2024-12-09 23:15:26.376152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:33:47.950 [2024-12-09 23:15:26.376351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:47.950 [2024-12-09 23:15:26.376749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:47.951 [2024-12-09 23:15:26.376756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:47.951 [2024-12-09 23:15:26.376765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:47.951 [2024-12-09 23:15:26.376774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:47.951 [2024-12-09 23:15:26.376783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:47.951 [2024-12-09 23:15:26.376791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:47.951 [2024-12-09 23:15:26.376801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:47.951 [2024-12-09 23:15:26.376819] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:47.951 [2024-12-09 23:15:26.376827] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b19f02f-bc22-48c6-b2e3-e16c6457781a 00:33:47.951 [2024-12-09 23:15:26.376835] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:47.951 [2024-12-09 23:15:26.376846] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:47.951 [2024-12-09 23:15:26.376855] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:47.951 [2024-12-09 23:15:26.376864] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:47.951 [2024-12-09 23:15:26.376870] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:47.951 [2024-12-09 23:15:26.376879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:47.951 [2024-12-09 23:15:26.376886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:47.951 [2024-12-09 23:15:26.376894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:47.951 [2024-12-09 23:15:26.376901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:47.951 [2024-12-09 23:15:26.376909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.951 [2024-12-09 23:15:26.376917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:47.951 [2024-12-09 23:15:26.376927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:33:47.951 [2024-12-09 23:15:26.376934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.951 [2024-12-09 23:15:26.389380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.951 [2024-12-09 23:15:26.389405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:47.951 [2024-12-09 23:15:26.389417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.409 ms 00:33:47.951 [2024-12-09 23:15:26.389424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.951 [2024-12-09 23:15:26.389766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.951 [2024-12-09 23:15:26.389779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:47.951 [2024-12-09 23:15:26.389788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:33:47.951 [2024-12-09 23:15:26.389795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.433027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.433070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:48.212 [2024-12-09 23:15:26.433083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.433093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
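Each "Bands validity" entry in the dump above reads Band N: <valid LBAs> / <band size in blocks> wr_cnt: <write count> state: <state>; on this freshly scrubbed device all 100 bands are free with 0 of 261120 blocks valid, which is also why the statistics show 960 total writes (all metadata), 0 user writes, and WAF: inf. For longer runs the dump is easier to digest collapsed by state; a throwaway filter, assuming the console output was saved to a file named build.log (hypothetical name):
grep -o 'state: [a-z]*' build.log | sort | uniq -c    # e.g. "100 state: free"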
00:33:48.212 [2024-12-09 23:15:26.433169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.433177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:48.212 [2024-12-09 23:15:26.433187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.433194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.433299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.433309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:48.212 [2024-12-09 23:15:26.433319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.433326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.433355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.433363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:48.212 [2024-12-09 23:15:26.433373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.433380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.513289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.513333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:48.212 [2024-12-09 23:15:26.513345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.513353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.575270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.575314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:48.212 [2024-12-09 23:15:26.575327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.575335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.575419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.575428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:48.212 [2024-12-09 23:15:26.575440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.575447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.575509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.575518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:48.212 [2024-12-09 23:15:26.575527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.575535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.575642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.575653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:48.212 [2024-12-09 23:15:26.575664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 
23:15:26.575671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.575718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.575727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:48.212 [2024-12-09 23:15:26.575736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.575743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.575781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.575789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:48.212 [2024-12-09 23:15:26.575799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.575808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.575852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:48.212 [2024-12-09 23:15:26.575861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:48.212 [2024-12-09 23:15:26.575870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:48.212 [2024-12-09 23:15:26.575878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.212 [2024-12-09 23:15:26.576025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.948 ms, result 0 00:33:48.212 true 00:33:48.212 23:15:26 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75417 00:33:48.212 23:15:26 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75417 ']' 00:33:48.212 23:15:26 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75417 00:33:48.212 23:15:26 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:33:48.213 23:15:26 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.213 23:15:26 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75417 00:33:48.213 killing process with pid 75417 00:33:48.213 23:15:26 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:48.213 23:15:26 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:48.213 23:15:26 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75417' 00:33:48.213 23:15:26 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75417 00:33:48.213 23:15:26 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75417 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:52.418 23:15:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:33:52.418 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:33:52.418 fio-3.35 00:33:52.418 Starting 1 thread 00:33:57.710 00:33:57.710 test: (groupid=0, jobs=1): err= 0: pid=75657: Mon Dec 9 23:15:35 2024 00:33:57.710 read: IOPS=1184, BW=78.7MiB/s (82.5MB/s)(255MiB/3236msec) 00:33:57.711 slat (nsec): min=3883, max=22831, avg=4953.02, stdev=2181.72 00:33:57.711 clat (usec): min=231, max=3277, avg=369.96, stdev=119.34 00:33:57.711 lat (usec): min=235, max=3281, avg=374.91, stdev=119.65 00:33:57.711 clat percentiles (usec): 00:33:57.711 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 310], 20.00th=[ 314], 00:33:57.711 | 30.00th=[ 318], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 326], 00:33:57.711 | 70.00th=[ 338], 80.00th=[ 424], 90.00th=[ 494], 95.00th=[ 644], 00:33:57.711 | 99.00th=[ 791], 99.50th=[ 848], 99.90th=[ 1057], 99.95th=[ 1172], 00:33:57.711 | 99.99th=[ 3294] 00:33:57.711 write: IOPS=1192, BW=79.2MiB/s (83.1MB/s)(256MiB/3233msec); 0 zone resets 00:33:57.711 slat (nsec): min=17693, max=75532, avg=20718.26, stdev=3656.70 00:33:57.711 clat (usec): min=286, max=69546, avg=432.08, stdev=1124.59 00:33:57.711 lat (usec): min=306, max=69565, avg=452.80, stdev=1124.59 00:33:57.711 clat percentiles (usec): 00:33:57.711 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 334], 20.00th=[ 338], 00:33:57.711 | 30.00th=[ 343], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 355], 00:33:57.711 | 70.00th=[ 375], 80.00th=[ 474], 90.00th=[ 668], 95.00th=[ 742], 00:33:57.711 | 99.00th=[ 930], 99.50th=[ 1004], 99.90th=[ 1156], 99.95th=[ 4490], 00:33:57.711 | 99.99th=[69731] 00:33:57.711 bw ( KiB/s): min=54808, max=91664, per=98.91%, avg=80217.33, stdev=14095.38, samples=6 00:33:57.711 iops : min= 806, max= 1348, avg=1179.67, stdev=207.28, samples=6 00:33:57.711 lat (usec) : 250=0.03%, 500=86.66%, 
750=10.61%, 1000=2.35% 00:33:57.711 lat (msec) : 2=0.30%, 4=0.03%, 10=0.01%, 100=0.01% 00:33:57.711 cpu : usr=99.20%, sys=0.12%, ctx=6, majf=0, minf=1169 00:33:57.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.711 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.711 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.711 00:33:57.711 Run status group 0 (all jobs): 00:33:57.711 READ: bw=78.7MiB/s (82.5MB/s), 78.7MiB/s-78.7MiB/s (82.5MB/s-82.5MB/s), io=255MiB (267MB), run=3236-3236msec 00:33:57.711 WRITE: bw=79.2MiB/s (83.1MB/s), 79.2MiB/s-79.2MiB/s (83.1MB/s-83.1MB/s), io=256MiB (269MB), run=3233-3233msec 00:33:58.652 ----------------------------------------------------- 00:33:58.652 Suppressions used: 00:33:58.652 count bytes template 00:33:58.652 1 5 /usr/src/fio/parse.c 00:33:58.652 1 8 libtcmalloc_minimal.so 00:33:58.652 1 904 libcrypto.so 00:33:58.652 ----------------------------------------------------- 00:33:58.652 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:33:58.652 
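The fio_bdev/fio_plugin trace above (repeated for every job file in this test) exists because fio itself is not built with ASan, so libasan has to be preloaded ahead of the instrumented spdk_bdev plugin; condensed, the traced logic amounts to this sketch, with the plugin, fio binary, and job paths taken from this run:
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3; exit}')    # resolves to /usr/lib64/libasan.so.8 here
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio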
23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:58.652 23:15:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:33:58.913 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:33:58.913 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:33:58.913 fio-3.35 00:33:58.913 Starting 2 threads 00:34:25.506 00:34:25.506 first_half: (groupid=0, jobs=1): err= 0: pid=75749: Mon Dec 9 23:15:59 2024 00:34:25.506 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(256MiB/21445msec) 00:34:25.506 slat (nsec): min=3093, max=20074, avg=3888.88, stdev=716.47 00:34:25.506 clat (usec): min=466, max=279832, avg=35645.66, stdev=22844.02 00:34:25.506 lat (usec): min=470, max=279837, avg=35649.55, stdev=22844.06 00:34:25.506 clat percentiles (msec): 00:34:25.506 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 29], 00:34:25.506 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:34:25.506 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 74], 00:34:25.506 | 99.00th=[ 150], 99.50th=[ 165], 99.90th=[ 211], 99.95th=[ 247], 00:34:25.506 | 99.99th=[ 275] 00:34:25.506 write: IOPS=3060, BW=12.0MiB/s (12.5MB/s)(256MiB/21416msec); 0 zone resets 00:34:25.506 slat (usec): min=3, max=761, avg= 5.34, stdev= 3.92 00:34:25.506 clat (usec): min=362, max=42108, avg=6246.72, stdev=5706.91 00:34:25.506 lat (usec): min=371, max=42113, avg=6252.06, stdev=5707.08 00:34:25.506 clat percentiles (usec): 00:34:25.506 | 1.00th=[ 701], 5.00th=[ 873], 10.00th=[ 1254], 20.00th=[ 2606], 00:34:25.506 | 30.00th=[ 3687], 40.00th=[ 4621], 50.00th=[ 5145], 60.00th=[ 5669], 00:34:25.506 | 70.00th=[ 6259], 80.00th=[ 7504], 90.00th=[11076], 95.00th=[17957], 00:34:25.506 | 99.00th=[30802], 99.50th=[32637], 99.90th=[35390], 99.95th=[38536], 00:34:25.506 | 99.99th=[41681] 00:34:25.506 bw ( KiB/s): min= 152, max=55472, per=100.00%, avg=26028.15, stdev=17276.52, samples=20 00:34:25.506 iops : min= 38, max=13868, avg=6507.00, stdev=4319.09, samples=20 00:34:25.506 lat (usec) : 500=0.06%, 750=0.94%, 1000=2.47% 00:34:25.506 lat (msec) : 2=4.22%, 4=8.68%, 10=28.54%, 20=4.45%, 50=47.16% 00:34:25.506 lat (msec) : 100=1.82%, 250=1.64%, 500=0.02% 00:34:25.506 cpu : usr=99.35%, sys=0.07%, ctx=33, majf=0, minf=5534 00:34:25.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:25.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.506 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:25.506 issued rwts: total=65476,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:25.506 second_half: (groupid=0, jobs=1): err= 0: pid=75750: Mon Dec 9 23:15:59 2024 00:34:25.506 read: IOPS=3079, BW=12.0MiB/s (12.6MB/s)(256MiB/21268msec) 00:34:25.506 slat (nsec): min=3123, max=24468, avg=3820.36, stdev=668.74 00:34:25.506 clat (msec): min=8, max=228, avg=35.93, stdev=20.63 00:34:25.506 lat (msec): min=8, max=228, avg=35.93, stdev=20.63 00:34:25.506 clat percentiles (msec): 00:34:25.506 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 29], 00:34:25.506 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:34:25.506 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 
41], 95.00th=[ 66], 00:34:25.506 | 99.00th=[ 148], 99.50th=[ 161], 99.90th=[ 178], 99.95th=[ 182], 00:34:25.506 | 99.99th=[ 190] 00:34:25.506 write: IOPS=3098, BW=12.1MiB/s (12.7MB/s)(256MiB/21153msec); 0 zone resets 00:34:25.506 slat (usec): min=3, max=858, avg= 5.19, stdev= 4.29 00:34:25.506 clat (usec): min=359, max=37927, avg=5622.59, stdev=3239.62 00:34:25.506 lat (usec): min=364, max=37931, avg=5627.78, stdev=3239.91 00:34:25.506 clat percentiles (usec): 00:34:25.506 | 1.00th=[ 1020], 5.00th=[ 2008], 10.00th=[ 2474], 20.00th=[ 3097], 00:34:25.506 | 30.00th=[ 3785], 40.00th=[ 4621], 50.00th=[ 5145], 60.00th=[ 5538], 00:34:25.506 | 70.00th=[ 5997], 80.00th=[ 7439], 90.00th=[10290], 95.00th=[11338], 00:34:25.506 | 99.00th=[15008], 99.50th=[19792], 99.90th=[33424], 99.95th=[36439], 00:34:25.506 | 99.99th=[36963] 00:34:25.506 bw ( KiB/s): min= 1424, max=41632, per=100.00%, avg=27390.32, stdev=14216.22, samples=19 00:34:25.506 iops : min= 356, max=10408, avg=6847.58, stdev=3554.05, samples=19 00:34:25.506 lat (usec) : 500=0.03%, 750=0.16%, 1000=0.28% 00:34:25.506 lat (msec) : 2=1.98%, 4=13.64%, 10=28.04%, 20=5.76%, 50=46.80% 00:34:25.506 lat (msec) : 100=1.73%, 250=1.57% 00:34:25.506 cpu : usr=99.41%, sys=0.15%, ctx=32, majf=0, minf=5581 00:34:25.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:25.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.506 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:25.506 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:25.506 00:34:25.506 Run status group 0 (all jobs): 00:34:25.506 READ: bw=23.9MiB/s (25.0MB/s), 11.9MiB/s-12.0MiB/s (12.5MB/s-12.6MB/s), io=512MiB (536MB), run=21268-21445msec 00:34:25.506 WRITE: bw=23.9MiB/s (25.1MB/s), 12.0MiB/s-12.1MiB/s (12.5MB/s-12.7MB/s), io=512MiB (537MB), run=21153-21416msec 00:34:25.506 ----------------------------------------------------- 00:34:25.506 Suppressions used: 00:34:25.506 count bytes template 00:34:25.506 2 10 /usr/src/fio/parse.c 00:34:25.506 3 288 /usr/src/fio/iolog.c 00:34:25.506 1 8 libtcmalloc_minimal.so 00:34:25.506 1 904 libcrypto.so 00:34:25.506 ----------------------------------------------------- 00:34:25.506 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
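randw-verify-j2.fio itself is never echoed into the log; judging from the two job names, the 4096B block size, iodepth=128, and the 256MiB written per job above, its shape is roughly the sketch below. Every line of it is a guess at the file, not its actual contents:
cat > /tmp/randw-verify-j2.guess.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
rw=randwrite
bs=4096
iodepth=128
; verify=crc32c is assumed; the test name implies write-with-verify
verify=crc32c
[first_half]
filename=ftl0
offset=0
size=50%
[second_half]
filename=ftl0
offset=50%
size=50%
EOF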
00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:25.506 23:16:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:34:25.506 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:34:25.506 fio-3.35 00:34:25.506 Starting 1 thread 00:34:40.438 00:34:40.438 test: (groupid=0, jobs=1): err= 0: pid=76041: Mon Dec 9 23:16:16 2024 00:34:40.438 read: IOPS=7968, BW=31.1MiB/s (32.6MB/s)(255MiB/8182msec) 00:34:40.438 slat (nsec): min=3130, max=28503, avg=3595.30, stdev=695.77 00:34:40.438 clat (usec): min=488, max=33630, avg=16054.55, stdev=1733.82 00:34:40.438 lat (usec): min=492, max=33634, avg=16058.15, stdev=1733.84 00:34:40.438 clat percentiles (usec): 00:34:40.438 | 1.00th=[14615], 5.00th=[14746], 10.00th=[14877], 20.00th=[15008], 00:34:40.438 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15533], 60.00th=[15795], 00:34:40.438 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17957], 95.00th=[19792], 00:34:40.438 | 99.00th=[22676], 99.50th=[23462], 99.90th=[26870], 99.95th=[28443], 00:34:40.438 | 99.99th=[32637] 00:34:40.438 write: IOPS=13.1k, BW=51.1MiB/s (53.6MB/s)(256MiB/5012msec); 0 zone resets 00:34:40.438 slat (usec): min=4, max=301, avg= 6.20, stdev= 2.80 00:34:40.438 clat (usec): min=532, max=64251, avg=9745.98, stdev=12053.56 00:34:40.438 lat (usec): min=546, max=64258, avg=9752.18, stdev=12053.61 00:34:40.438 clat percentiles (usec): 00:34:40.438 | 1.00th=[ 668], 5.00th=[ 824], 10.00th=[ 938], 20.00th=[ 1106], 00:34:40.438 | 30.00th=[ 1401], 40.00th=[ 2474], 50.00th=[ 5866], 60.00th=[ 7504], 00:34:40.438 | 70.00th=[ 9765], 80.00th=[12911], 90.00th=[30016], 95.00th=[37487], 00:34:40.438 | 99.00th=[49546], 99.50th=[56361], 99.90th=[61080], 99.95th=[61604], 00:34:40.438 | 99.99th=[62653] 00:34:40.438 bw ( KiB/s): min= 1016, max=67936, per=91.13%, avg=47662.55, stdev=18007.30, samples=11 00:34:40.438 iops : min= 254, max=16984, avg=11915.64, stdev=4501.83, samples=11 00:34:40.438 lat (usec) : 500=0.01%, 750=1.49%, 1000=5.40% 00:34:40.438 lat (msec) : 2=11.91%, 4=2.28%, 10=14.61%, 20=53.92%, 50=9.91% 00:34:40.438 lat (msec) : 100=0.47% 00:34:40.438 cpu : usr=99.14%, sys=0.12%, 
ctx=23, majf=0, minf=5565 00:34:40.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:40.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.438 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:40.438 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:40.438 00:34:40.438 Run status group 0 (all jobs): 00:34:40.438 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=255MiB (267MB), run=8182-8182msec 00:34:40.438 WRITE: bw=51.1MiB/s (53.6MB/s), 51.1MiB/s-51.1MiB/s (53.6MB/s-53.6MB/s), io=256MiB (268MB), run=5012-5012msec 00:34:40.438 ----------------------------------------------------- 00:34:40.438 Suppressions used: 00:34:40.438 count bytes template 00:34:40.438 1 5 /usr/src/fio/parse.c 00:34:40.438 2 192 /usr/src/fio/iolog.c 00:34:40.438 1 8 libtcmalloc_minimal.so 00:34:40.438 1 904 libcrypto.so 00:34:40.438 ----------------------------------------------------- 00:34:40.438 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:34:40.438 Remove shared memory files 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57133 /dev/shm/spdk_tgt_trace.pid74331 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:34:40.438 00:34:40.438 real 1m3.158s 00:34:40.438 user 2m20.198s 00:34:40.438 sys 0m2.858s 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.438 23:16:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:34:40.438 ************************************ 00:34:40.438 END TEST ftl_fio_basic 00:34:40.438 ************************************ 00:34:40.438 23:16:17 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:34:40.438 23:16:17 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:40.438 23:16:17 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:40.438 23:16:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:40.438 ************************************ 00:34:40.438 START TEST ftl_bdevperf 00:34:40.438 ************************************ 00:34:40.438 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:34:40.438 * Looking for test storage... 
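remove_shm above is dropping the trace buffers that the two spdk_tgt instances of this workspace (pids 57133 and 74331) left in /dev/shm; outside the harness the same cleanup is just:
rm -f /dev/shm/spdk_tgt_trace.pid*    # per-target-PID trace files; this run had .pid57133 and .pid74331
rm -f /dev/shm/iscsi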
00:34:40.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:34:40.438 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:40.438 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:40.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.439 --rc genhtml_branch_coverage=1 00:34:40.439 --rc genhtml_function_coverage=1 00:34:40.439 --rc genhtml_legend=1 00:34:40.439 --rc geninfo_all_blocks=1 00:34:40.439 --rc geninfo_unexecuted_blocks=1 00:34:40.439 00:34:40.439 ' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:40.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.439 --rc genhtml_branch_coverage=1 00:34:40.439 
--rc genhtml_function_coverage=1 00:34:40.439 --rc genhtml_legend=1 00:34:40.439 --rc geninfo_all_blocks=1 00:34:40.439 --rc geninfo_unexecuted_blocks=1 00:34:40.439 00:34:40.439 ' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:40.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.439 --rc genhtml_branch_coverage=1 00:34:40.439 --rc genhtml_function_coverage=1 00:34:40.439 --rc genhtml_legend=1 00:34:40.439 --rc geninfo_all_blocks=1 00:34:40.439 --rc geninfo_unexecuted_blocks=1 00:34:40.439 00:34:40.439 ' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:40.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.439 --rc genhtml_branch_coverage=1 00:34:40.439 --rc genhtml_function_coverage=1 00:34:40.439 --rc genhtml_legend=1 00:34:40.439 --rc geninfo_all_blocks=1 00:34:40.439 --rc geninfo_unexecuted_blocks=1 00:34:40.439 00:34:40.439 ' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76272 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76272 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76272 ']' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:40.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:40.439 23:16:17 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:40.439 [2024-12-09 23:16:17.945825] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
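Note on the lcov gate traced above: cmp_versions splits both version strings on ".", "-" and ":" and compares them component by component, so the coverage flags are only added when the installed lcov predates 2.x. A minimal standalone sketch of the same idea (an illustrative approximation of what scripts/common.sh traces above, not the exact source):

  lt() { # lt A B -> exit 0 iff version A sorts before version B
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # walk the longer of the two component lists, padding the shorter with 0
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1
  }
  lt 1.15 2 && echo "lcov < 2: enable --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"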
00:34:40.439 [2024-12-09 23:16:17.945946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76272 ] 00:34:40.439 [2024-12-09 23:16:18.097497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.439 [2024-12-09 23:16:18.196903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.439 23:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:40.439 23:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:34:40.439 23:16:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:34:40.439 23:16:18 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:34:40.439 23:16:18 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:34:40.439 23:16:18 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:34:40.439 23:16:18 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:34:40.439 23:16:18 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:40.701 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:34:40.701 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:34:40.701 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:34:40.701 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:34:40.701 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:34:40.701 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:34:40.701 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:34:40.701 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:34:40.964 { 00:34:40.964 "name": "nvme0n1", 00:34:40.964 "aliases": [ 00:34:40.964 "cc07a1f1-f2f9-457a-aa4f-d79edda3759d" 00:34:40.964 ], 00:34:40.964 "product_name": "NVMe disk", 00:34:40.964 "block_size": 4096, 00:34:40.964 "num_blocks": 1310720, 00:34:40.964 "uuid": "cc07a1f1-f2f9-457a-aa4f-d79edda3759d", 00:34:40.964 "numa_id": -1, 00:34:40.964 "assigned_rate_limits": { 00:34:40.964 "rw_ios_per_sec": 0, 00:34:40.964 "rw_mbytes_per_sec": 0, 00:34:40.964 "r_mbytes_per_sec": 0, 00:34:40.964 "w_mbytes_per_sec": 0 00:34:40.964 }, 00:34:40.964 "claimed": true, 00:34:40.964 "claim_type": "read_many_write_one", 00:34:40.964 "zoned": false, 00:34:40.964 "supported_io_types": { 00:34:40.964 "read": true, 00:34:40.964 "write": true, 00:34:40.964 "unmap": true, 00:34:40.964 "flush": true, 00:34:40.964 "reset": true, 00:34:40.964 "nvme_admin": true, 00:34:40.964 "nvme_io": true, 00:34:40.964 "nvme_io_md": false, 00:34:40.964 "write_zeroes": true, 00:34:40.964 "zcopy": false, 00:34:40.964 "get_zone_info": false, 00:34:40.964 "zone_management": false, 00:34:40.964 "zone_append": false, 00:34:40.964 "compare": true, 00:34:40.964 "compare_and_write": false, 00:34:40.964 "abort": true, 00:34:40.964 "seek_hole": false, 00:34:40.964 "seek_data": false, 00:34:40.964 "copy": true, 00:34:40.964 "nvme_iov_md": false 00:34:40.964 }, 00:34:40.964 "driver_specific": { 00:34:40.964 
"nvme": [ 00:34:40.964 { 00:34:40.964 "pci_address": "0000:00:11.0", 00:34:40.964 "trid": { 00:34:40.964 "trtype": "PCIe", 00:34:40.964 "traddr": "0000:00:11.0" 00:34:40.964 }, 00:34:40.964 "ctrlr_data": { 00:34:40.964 "cntlid": 0, 00:34:40.964 "vendor_id": "0x1b36", 00:34:40.964 "model_number": "QEMU NVMe Ctrl", 00:34:40.964 "serial_number": "12341", 00:34:40.964 "firmware_revision": "8.0.0", 00:34:40.964 "subnqn": "nqn.2019-08.org.qemu:12341", 00:34:40.964 "oacs": { 00:34:40.964 "security": 0, 00:34:40.964 "format": 1, 00:34:40.964 "firmware": 0, 00:34:40.964 "ns_manage": 1 00:34:40.964 }, 00:34:40.964 "multi_ctrlr": false, 00:34:40.964 "ana_reporting": false 00:34:40.964 }, 00:34:40.964 "vs": { 00:34:40.964 "nvme_version": "1.4" 00:34:40.964 }, 00:34:40.964 "ns_data": { 00:34:40.964 "id": 1, 00:34:40.964 "can_share": false 00:34:40.964 } 00:34:40.964 } 00:34:40.964 ], 00:34:40.964 "mp_policy": "active_passive" 00:34:40.964 } 00:34:40.964 } 00:34:40.964 ]' 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:40.964 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:41.228 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=d32983aa-0458-45a7-bd81-ea99be1ee5fa 00:34:41.228 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:34:41.228 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d32983aa-0458-45a7-bd81-ea99be1ee5fa 00:34:41.490 23:16:19 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:34:41.752 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=b60fa0e2-1b68-4a91-8c43-7a20123d87d2 00:34:41.752 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b60fa0e2-1b68-4a91-8c43-7a20123d87d2 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:42.014 23:16:20 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:34:42.014 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:42.276 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:34:42.276 { 00:34:42.276 "name": "a6b31fa8-f142-4a4e-b771-71b55c193dbb", 00:34:42.276 "aliases": [ 00:34:42.276 "lvs/nvme0n1p0" 00:34:42.276 ], 00:34:42.276 "product_name": "Logical Volume", 00:34:42.276 "block_size": 4096, 00:34:42.277 "num_blocks": 26476544, 00:34:42.277 "uuid": "a6b31fa8-f142-4a4e-b771-71b55c193dbb", 00:34:42.277 "assigned_rate_limits": { 00:34:42.277 "rw_ios_per_sec": 0, 00:34:42.277 "rw_mbytes_per_sec": 0, 00:34:42.277 "r_mbytes_per_sec": 0, 00:34:42.277 "w_mbytes_per_sec": 0 00:34:42.277 }, 00:34:42.277 "claimed": false, 00:34:42.277 "zoned": false, 00:34:42.277 "supported_io_types": { 00:34:42.277 "read": true, 00:34:42.277 "write": true, 00:34:42.277 "unmap": true, 00:34:42.277 "flush": false, 00:34:42.277 "reset": true, 00:34:42.277 "nvme_admin": false, 00:34:42.277 "nvme_io": false, 00:34:42.277 "nvme_io_md": false, 00:34:42.277 "write_zeroes": true, 00:34:42.277 "zcopy": false, 00:34:42.277 "get_zone_info": false, 00:34:42.277 "zone_management": false, 00:34:42.277 "zone_append": false, 00:34:42.277 "compare": false, 00:34:42.277 "compare_and_write": false, 00:34:42.277 "abort": false, 00:34:42.277 "seek_hole": true, 00:34:42.277 "seek_data": true, 00:34:42.277 "copy": false, 00:34:42.277 "nvme_iov_md": false 00:34:42.277 }, 00:34:42.277 "driver_specific": { 00:34:42.277 "lvol": { 00:34:42.277 "lvol_store_uuid": "b60fa0e2-1b68-4a91-8c43-7a20123d87d2", 00:34:42.277 "base_bdev": "nvme0n1", 00:34:42.277 "thin_provision": true, 00:34:42.277 "num_allocated_clusters": 0, 00:34:42.277 "snapshot": false, 00:34:42.277 "clone": false, 00:34:42.277 "esnap_clone": false 00:34:42.277 } 00:34:42.277 } 00:34:42.277 } 00:34:42.277 ]' 00:34:42.277 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:34:42.277 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:34:42.277 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:34:42.277 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:34:42.277 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:34:42.277 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:34:42.277 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:34:42.277 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:34:42.277 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:34:42.535 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:34:42.535 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:34:42.535 23:16:20 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:42.535 23:16:20 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:42.535 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:34:42.535 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:34:42.535 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:34:42.535 23:16:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:42.793 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:34:42.793 { 00:34:42.793 "name": "a6b31fa8-f142-4a4e-b771-71b55c193dbb", 00:34:42.793 "aliases": [ 00:34:42.793 "lvs/nvme0n1p0" 00:34:42.793 ], 00:34:42.793 "product_name": "Logical Volume", 00:34:42.793 "block_size": 4096, 00:34:42.793 "num_blocks": 26476544, 00:34:42.793 "uuid": "a6b31fa8-f142-4a4e-b771-71b55c193dbb", 00:34:42.793 "assigned_rate_limits": { 00:34:42.793 "rw_ios_per_sec": 0, 00:34:42.793 "rw_mbytes_per_sec": 0, 00:34:42.793 "r_mbytes_per_sec": 0, 00:34:42.793 "w_mbytes_per_sec": 0 00:34:42.793 }, 00:34:42.793 "claimed": false, 00:34:42.793 "zoned": false, 00:34:42.793 "supported_io_types": { 00:34:42.793 "read": true, 00:34:42.793 "write": true, 00:34:42.793 "unmap": true, 00:34:42.793 "flush": false, 00:34:42.793 "reset": true, 00:34:42.793 "nvme_admin": false, 00:34:42.793 "nvme_io": false, 00:34:42.793 "nvme_io_md": false, 00:34:42.793 "write_zeroes": true, 00:34:42.793 "zcopy": false, 00:34:42.793 "get_zone_info": false, 00:34:42.793 "zone_management": false, 00:34:42.793 "zone_append": false, 00:34:42.793 "compare": false, 00:34:42.793 "compare_and_write": false, 00:34:42.793 "abort": false, 00:34:42.793 "seek_hole": true, 00:34:42.793 "seek_data": true, 00:34:42.793 "copy": false, 00:34:42.793 "nvme_iov_md": false 00:34:42.793 }, 00:34:42.793 "driver_specific": { 00:34:42.793 "lvol": { 00:34:42.793 "lvol_store_uuid": "b60fa0e2-1b68-4a91-8c43-7a20123d87d2", 00:34:42.793 "base_bdev": "nvme0n1", 00:34:42.793 "thin_provision": true, 00:34:42.793 "num_allocated_clusters": 0, 00:34:42.793 "snapshot": false, 00:34:42.793 "clone": false, 00:34:42.793 "esnap_clone": false 00:34:42.793 } 00:34:42.793 } 00:34:42.793 } 00:34:42.793 ]' 00:34:42.793 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:34:42.793 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:34:42.793 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:34:42.793 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:34:42.793 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:34:42.793 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:34:42.793 23:16:21 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:34:42.793 23:16:21 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:34:43.051 23:16:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:34:43.051 23:16:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:43.051 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:43.051 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:34:43.051 23:16:21 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:34:43.051 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:34:43.051 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a6b31fa8-f142-4a4e-b771-71b55c193dbb 00:34:43.051 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:34:43.051 { 00:34:43.051 "name": "a6b31fa8-f142-4a4e-b771-71b55c193dbb", 00:34:43.051 "aliases": [ 00:34:43.051 "lvs/nvme0n1p0" 00:34:43.051 ], 00:34:43.051 "product_name": "Logical Volume", 00:34:43.051 "block_size": 4096, 00:34:43.051 "num_blocks": 26476544, 00:34:43.051 "uuid": "a6b31fa8-f142-4a4e-b771-71b55c193dbb", 00:34:43.051 "assigned_rate_limits": { 00:34:43.051 "rw_ios_per_sec": 0, 00:34:43.051 "rw_mbytes_per_sec": 0, 00:34:43.051 "r_mbytes_per_sec": 0, 00:34:43.051 "w_mbytes_per_sec": 0 00:34:43.051 }, 00:34:43.051 "claimed": false, 00:34:43.051 "zoned": false, 00:34:43.051 "supported_io_types": { 00:34:43.051 "read": true, 00:34:43.051 "write": true, 00:34:43.051 "unmap": true, 00:34:43.051 "flush": false, 00:34:43.051 "reset": true, 00:34:43.051 "nvme_admin": false, 00:34:43.051 "nvme_io": false, 00:34:43.051 "nvme_io_md": false, 00:34:43.051 "write_zeroes": true, 00:34:43.051 "zcopy": false, 00:34:43.051 "get_zone_info": false, 00:34:43.051 "zone_management": false, 00:34:43.051 "zone_append": false, 00:34:43.051 "compare": false, 00:34:43.051 "compare_and_write": false, 00:34:43.051 "abort": false, 00:34:43.051 "seek_hole": true, 00:34:43.051 "seek_data": true, 00:34:43.051 "copy": false, 00:34:43.051 "nvme_iov_md": false 00:34:43.051 }, 00:34:43.051 "driver_specific": { 00:34:43.051 "lvol": { 00:34:43.051 "lvol_store_uuid": "b60fa0e2-1b68-4a91-8c43-7a20123d87d2", 00:34:43.051 "base_bdev": "nvme0n1", 00:34:43.051 "thin_provision": true, 00:34:43.051 "num_allocated_clusters": 0, 00:34:43.051 "snapshot": false, 00:34:43.051 "clone": false, 00:34:43.051 "esnap_clone": false 00:34:43.051 } 00:34:43.051 } 00:34:43.051 } 00:34:43.051 ]' 00:34:43.051 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:34:43.310 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:34:43.310 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:34:43.310 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:34:43.310 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:34:43.310 23:16:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:34:43.310 23:16:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:34:43.310 23:16:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a6b31fa8-f142-4a4e-b771-71b55c193dbb -c nvc0n1p0 --l2p_dram_limit 20 00:34:43.310 [2024-12-09 23:16:21.730372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.730422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:43.310 [2024-12-09 23:16:21.730437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:43.310 [2024-12-09 23:16:21.730449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.310 [2024-12-09 23:16:21.730506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.730519] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:43.310 [2024-12-09 23:16:21.730528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:34:43.310 [2024-12-09 23:16:21.730538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.310 [2024-12-09 23:16:21.730556] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:43.310 [2024-12-09 23:16:21.731267] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:43.310 [2024-12-09 23:16:21.731290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.731300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:43.310 [2024-12-09 23:16:21.731308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:34:43.310 [2024-12-09 23:16:21.731317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.310 [2024-12-09 23:16:21.731375] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e530c227-a306-46c3-b9a5-c9da92e0f43f 00:34:43.310 [2024-12-09 23:16:21.732385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.732411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:34:43.310 [2024-12-09 23:16:21.732425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:34:43.310 [2024-12-09 23:16:21.732433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.310 [2024-12-09 23:16:21.737310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.737339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:43.310 [2024-12-09 23:16:21.737350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.839 ms 00:34:43.310 [2024-12-09 23:16:21.737360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.310 [2024-12-09 23:16:21.737437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.737446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:43.310 [2024-12-09 23:16:21.737462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:34:43.310 [2024-12-09 23:16:21.737469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.310 [2024-12-09 23:16:21.737505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.737514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:43.310 [2024-12-09 23:16:21.737523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:43.310 [2024-12-09 23:16:21.737530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.310 [2024-12-09 23:16:21.737552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:43.310 [2024-12-09 23:16:21.741053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.741084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:43.310 [2024-12-09 23:16:21.741093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.510 ms 00:34:43.310 [2024-12-09 23:16:21.741105] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.310 [2024-12-09 23:16:21.741135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.741144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:43.310 [2024-12-09 23:16:21.741152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:43.310 [2024-12-09 23:16:21.741160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.310 [2024-12-09 23:16:21.741187] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:34:43.310 [2024-12-09 23:16:21.741354] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:43.310 [2024-12-09 23:16:21.741367] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:43.310 [2024-12-09 23:16:21.741379] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:43.310 [2024-12-09 23:16:21.741389] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:43.310 [2024-12-09 23:16:21.741400] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:43.310 [2024-12-09 23:16:21.741408] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:43.310 [2024-12-09 23:16:21.741417] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:43.310 [2024-12-09 23:16:21.741423] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:43.310 [2024-12-09 23:16:21.741432] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:43.310 [2024-12-09 23:16:21.741440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.741449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:43.310 [2024-12-09 23:16:21.741456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:34:43.310 [2024-12-09 23:16:21.741465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.310 [2024-12-09 23:16:21.741546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.310 [2024-12-09 23:16:21.741555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:43.310 [2024-12-09 23:16:21.741562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:34:43.311 [2024-12-09 23:16:21.741572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.311 [2024-12-09 23:16:21.741671] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:43.311 [2024-12-09 23:16:21.741691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:43.311 [2024-12-09 23:16:21.741700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:43.311 [2024-12-09 23:16:21.741709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:43.311 [2024-12-09 23:16:21.741725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:43.311 
[2024-12-09 23:16:21.741740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:43.311 [2024-12-09 23:16:21.741747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:43.311 [2024-12-09 23:16:21.741766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:43.311 [2024-12-09 23:16:21.741780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:43.311 [2024-12-09 23:16:21.741787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:43.311 [2024-12-09 23:16:21.741795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:43.311 [2024-12-09 23:16:21.741802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:43.311 [2024-12-09 23:16:21.741812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:43.311 [2024-12-09 23:16:21.741826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:43.311 [2024-12-09 23:16:21.741832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:43.311 [2024-12-09 23:16:21.741846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:43.311 [2024-12-09 23:16:21.741861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:43.311 [2024-12-09 23:16:21.741868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:43.311 [2024-12-09 23:16:21.741882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:43.311 [2024-12-09 23:16:21.741888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:43.311 [2024-12-09 23:16:21.741902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:43.311 [2024-12-09 23:16:21.741911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:43.311 [2024-12-09 23:16:21.741927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:43.311 [2024-12-09 23:16:21.741934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:43.311 [2024-12-09 23:16:21.741949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:43.311 [2024-12-09 23:16:21.741957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:43.311 [2024-12-09 23:16:21.741963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:43.311 [2024-12-09 23:16:21.741970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:43.311 [2024-12-09 23:16:21.741977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:34:43.311 [2024-12-09 23:16:21.741984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.311 [2024-12-09 23:16:21.741991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:43.311 [2024-12-09 23:16:21.741999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:43.311 [2024-12-09 23:16:21.742007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.311 [2024-12-09 23:16:21.742015] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:43.311 [2024-12-09 23:16:21.742022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:43.311 [2024-12-09 23:16:21.742031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:43.311 [2024-12-09 23:16:21.742037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:43.311 [2024-12-09 23:16:21.742048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:43.311 [2024-12-09 23:16:21.742054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:43.311 [2024-12-09 23:16:21.742062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:43.311 [2024-12-09 23:16:21.742069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:43.311 [2024-12-09 23:16:21.742076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:43.311 [2024-12-09 23:16:21.742083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:43.311 [2024-12-09 23:16:21.742092] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:43.311 [2024-12-09 23:16:21.742101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:43.311 [2024-12-09 23:16:21.742111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:43.311 [2024-12-09 23:16:21.742119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:43.311 [2024-12-09 23:16:21.742127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:43.311 [2024-12-09 23:16:21.742133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:43.311 [2024-12-09 23:16:21.742142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:43.311 [2024-12-09 23:16:21.742149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:43.311 [2024-12-09 23:16:21.742158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:43.311 [2024-12-09 23:16:21.742165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:43.311 [2024-12-09 23:16:21.742175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:43.311 [2024-12-09 23:16:21.742182] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:43.311 [2024-12-09 23:16:21.742190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:43.311 [2024-12-09 23:16:21.742197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:43.311 [2024-12-09 23:16:21.742206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:43.311 [2024-12-09 23:16:21.742213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:43.311 [2024-12-09 23:16:21.742237] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:43.311 [2024-12-09 23:16:21.742246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:43.311 [2024-12-09 23:16:21.742257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:43.311 [2024-12-09 23:16:21.742265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:43.311 [2024-12-09 23:16:21.742273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:43.311 [2024-12-09 23:16:21.742282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:43.311 [2024-12-09 23:16:21.742292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:43.311 [2024-12-09 23:16:21.742299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:43.311 [2024-12-09 23:16:21.742308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:34:43.311 [2024-12-09 23:16:21.742315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.311 [2024-12-09 23:16:21.742348] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
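The startup just traced (attach the base NVMe, carve a thin lvol, attach the cache NVMe, split off a write-buffer partition, then bdev_ftl_create) can be replayed by hand against a running spdk_tgt. A condensed sketch using the same RPCs and the addresses/sizes from this run; the PCI addresses and sizes would differ on other hosts, and the UUIDs are whatever the create calls return:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base device -> nvme0n1
  LVS=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)                     # prints the lvstore UUID
  LVOL=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS")          # thin-provisioned lvol, 103424 MiB
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # cache device -> nvc0n1
  $RPC bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB split -> nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 -d "$LVOL" -c nvc0n1p0 --l2p_dram_limit 20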
00:34:43.311 [2024-12-09 23:16:21.742357] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:34:46.590 [2024-12-09 23:16:24.340782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.340843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:34:46.590 [2024-12-09 23:16:24.340860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2598.410 ms 00:34:46.590 [2024-12-09 23:16:24.340868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.366475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.366516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:46.590 [2024-12-09 23:16:24.366530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.401 ms 00:34:46.590 [2024-12-09 23:16:24.366538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.366663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.366674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:46.590 [2024-12-09 23:16:24.366686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:34:46.590 [2024-12-09 23:16:24.366693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.416955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.417000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:46.590 [2024-12-09 23:16:24.417014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.214 ms 00:34:46.590 [2024-12-09 23:16:24.417022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.417067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.417076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:46.590 [2024-12-09 23:16:24.417086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:46.590 [2024-12-09 23:16:24.417096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.417483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.417504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:46.590 [2024-12-09 23:16:24.417515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:34:46.590 [2024-12-09 23:16:24.417522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.417637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.417653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:46.590 [2024-12-09 23:16:24.417665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:34:46.590 [2024-12-09 23:16:24.417673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.430455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.430486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:46.590 [2024-12-09 
23:16:24.430497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.764 ms 00:34:46.590 [2024-12-09 23:16:24.430511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.441847] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:34:46.590 [2024-12-09 23:16:24.446743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.446777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:46.590 [2024-12-09 23:16:24.446789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.167 ms 00:34:46.590 [2024-12-09 23:16:24.446799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.507541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.507590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:34:46.590 [2024-12-09 23:16:24.507602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.719 ms 00:34:46.590 [2024-12-09 23:16:24.507612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.507782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.507797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:46.590 [2024-12-09 23:16:24.507805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:34:46.590 [2024-12-09 23:16:24.507817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.531173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.531213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:34:46.590 [2024-12-09 23:16:24.531231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.317 ms 00:34:46.590 [2024-12-09 23:16:24.531241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.590 [2024-12-09 23:16:24.553668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.590 [2024-12-09 23:16:24.553704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:34:46.590 [2024-12-09 23:16:24.553716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.395 ms 00:34:46.591 [2024-12-09 23:16:24.553726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.591 [2024-12-09 23:16:24.554299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.591 [2024-12-09 23:16:24.554323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:46.591 [2024-12-09 23:16:24.554331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:34:46.591 [2024-12-09 23:16:24.554340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.591 [2024-12-09 23:16:24.628622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.591 [2024-12-09 23:16:24.628678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:34:46.591 [2024-12-09 23:16:24.628691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.251 ms 00:34:46.591 [2024-12-09 23:16:24.628702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.591 [2024-12-09 
23:16:24.652680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.591 [2024-12-09 23:16:24.652725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:34:46.591 [2024-12-09 23:16:24.652740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.905 ms 00:34:46.591 [2024-12-09 23:16:24.652750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.591 [2024-12-09 23:16:24.675607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.591 [2024-12-09 23:16:24.675644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:34:46.591 [2024-12-09 23:16:24.675655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.823 ms 00:34:46.591 [2024-12-09 23:16:24.675664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.591 [2024-12-09 23:16:24.698600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.591 [2024-12-09 23:16:24.698637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:46.591 [2024-12-09 23:16:24.698648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.904 ms 00:34:46.591 [2024-12-09 23:16:24.698658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.591 [2024-12-09 23:16:24.698692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.591 [2024-12-09 23:16:24.698705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:46.591 [2024-12-09 23:16:24.698714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:46.591 [2024-12-09 23:16:24.698723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.591 [2024-12-09 23:16:24.698795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.591 [2024-12-09 23:16:24.698807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:46.591 [2024-12-09 23:16:24.698815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:34:46.591 [2024-12-09 23:16:24.698823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.591 [2024-12-09 23:16:24.699656] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2968.889 ms, result 0 00:34:46.591 { 00:34:46.591 "name": "ftl0", 00:34:46.591 "uuid": "e530c227-a306-46c3-b9a5-c9da92e0f43f" 00:34:46.591 } 00:34:46.591 23:16:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:34:46.591 23:16:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:34:46.591 23:16:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:34:46.591 23:16:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:34:46.591 [2024-12-09 23:16:25.007971] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:34:46.591 I/O size of 69632 is greater than zero copy threshold (65536). 00:34:46.591 Zero copy mechanism will not be used. 00:34:46.591 Running I/O for 4 seconds... 
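All three of the runs that follow are driven through bdevperf's RPC helper while the bdevperf process started earlier with -z (wait for RPC) keeps ftl0 claimed. For reference, the workload sequence in the order the log reports it:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  $BDEVPERF perform_tests -q 1   -w randwrite -t 4 -o 69632   # QD 1, 68 KiB writes (above the 65536 zero-copy threshold noted below)
  $BDEVPERF perform_tests -q 128 -w randwrite -t 4 -o 4096    # QD 128, 4 KiB writes
  $BDEVPERF perform_tests -q 128 -w verify    -t 4 -o 4096    # QD 128, 4 KiB verified I/O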
00:34:48.908 2974.00 IOPS, 197.49 MiB/s [2024-12-09T23:16:28.303Z] 2989.50 IOPS, 198.52 MiB/s [2024-12-09T23:16:29.235Z] 2994.33 IOPS, 198.84 MiB/s [2024-12-09T23:16:29.235Z] 2983.25 IOPS, 198.11 MiB/s 00:34:50.773 Latency(us) 00:34:50.773 [2024-12-09T23:16:29.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.773 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:34:50.773 ftl0 : 4.00 2982.02 198.02 0.00 0.00 351.73 166.20 3629.69 00:34:50.773 [2024-12-09T23:16:29.235Z] =================================================================================================================== 00:34:50.773 [2024-12-09T23:16:29.235Z] Total : 2982.02 198.02 0.00 0.00 351.73 166.20 3629.69 00:34:50.773 [2024-12-09 23:16:29.018120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:34:50.773 { 00:34:50.773 "results": [ 00:34:50.773 { 00:34:50.773 "job": "ftl0", 00:34:50.773 "core_mask": "0x1", 00:34:50.773 "workload": "randwrite", 00:34:50.773 "status": "finished", 00:34:50.773 "queue_depth": 1, 00:34:50.773 "io_size": 69632, 00:34:50.773 "runtime": 4.00199, 00:34:50.773 "iops": 2982.016446817708, 00:34:50.773 "mibps": 198.02452967148844, 00:34:50.773 "io_failed": 0, 00:34:50.773 "io_timeout": 0, 00:34:50.773 "avg_latency_us": 351.7330301272383, 00:34:50.773 "min_latency_us": 166.20307692307694, 00:34:50.773 "max_latency_us": 3629.686153846154 00:34:50.773 } 00:34:50.773 ], 00:34:50.773 "core_count": 1 00:34:50.773 } 00:34:50.773 23:16:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:34:50.773 [2024-12-09 23:16:29.129203] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:34:50.773 Running I/O for 4 seconds... 
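A quick cross-check of the QD-1 table above: bdevperf's MiB/s column is just IOPS times the I/O size, e.g.

  echo '2982.02 * 69632 / 1048576' | bc -l   # ~198.02 MiB/s, matching the reported 198.02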
00:34:52.677 11322.00 IOPS, 44.23 MiB/s [2024-12-09T23:16:32.521Z] 10831.50 IOPS, 42.31 MiB/s [2024-12-09T23:16:33.464Z] 10572.67 IOPS, 41.30 MiB/s [2024-12-09T23:16:33.464Z] 10300.00 IOPS, 40.23 MiB/s 00:34:55.002 Latency(us) 00:34:55.002 [2024-12-09T23:16:33.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.002 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:34:55.002 ftl0 : 4.02 10290.23 40.20 0.00 0.00 12413.40 231.58 31658.93 00:34:55.002 [2024-12-09T23:16:33.464Z] =================================================================================================================== 00:34:55.002 [2024-12-09T23:16:33.464Z] Total : 10290.23 40.20 0.00 0.00 12413.40 0.00 31658.93 00:34:55.002 [2024-12-09 23:16:33.154079] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:34:55.002 { 00:34:55.002 "results": [ 00:34:55.002 { 00:34:55.002 "job": "ftl0", 00:34:55.002 "core_mask": "0x1", 00:34:55.002 "workload": "randwrite", 00:34:55.002 "status": "finished", 00:34:55.002 "queue_depth": 128, 00:34:55.002 "io_size": 4096, 00:34:55.002 "runtime": 4.016238, 00:34:55.002 "iops": 10290.226824207131, 00:34:55.002 "mibps": 40.19619853205911, 00:34:55.002 "io_failed": 0, 00:34:55.002 "io_timeout": 0, 00:34:55.002 "avg_latency_us": 12413.40291015218, 00:34:55.002 "min_latency_us": 231.58153846153846, 00:34:55.002 "max_latency_us": 31658.92923076923 00:34:55.002 } 00:34:55.002 ], 00:34:55.002 "core_count": 1 00:34:55.002 } 00:34:55.002 23:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:34:55.002 [2024-12-09 23:16:33.264817] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:34:55.002 Running I/O for 4 seconds... 
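The verify results that follow report their range in hex; 0x1400000 is the 20971520 that also appears in the JSON "verify_range" length field:

  printf '%d\n' 0x1400000   # 20971520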
00:34:56.881 8321.00 IOPS, 32.50 MiB/s [2024-12-09T23:16:36.279Z] 8464.50 IOPS, 33.06 MiB/s [2024-12-09T23:16:37.276Z] 8604.67 IOPS, 33.61 MiB/s [2024-12-09T23:16:37.534Z] 8659.00 IOPS, 33.82 MiB/s 00:34:59.072 Latency(us) 00:34:59.072 [2024-12-09T23:16:37.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.072 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:59.072 Verification LBA range: start 0x0 length 0x1400000 00:34:59.072 ftl0 : 4.01 8671.41 33.87 0.00 0.00 14715.52 277.27 25407.80 00:34:59.072 [2024-12-09T23:16:37.534Z] =================================================================================================================== 00:34:59.072 [2024-12-09T23:16:37.534Z] Total : 8671.41 33.87 0.00 0.00 14715.52 0.00 25407.80 00:34:59.072 [2024-12-09 23:16:37.288122] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:34:59.072 { 00:34:59.072 "results": [ 00:34:59.072 { 00:34:59.072 "job": "ftl0", 00:34:59.072 "core_mask": "0x1", 00:34:59.072 "workload": "verify", 00:34:59.072 "status": "finished", 00:34:59.072 "verify_range": { 00:34:59.072 "start": 0, 00:34:59.072 "length": 20971520 00:34:59.072 }, 00:34:59.072 "queue_depth": 128, 00:34:59.072 "io_size": 4096, 00:34:59.072 "runtime": 4.008804, 00:34:59.072 "iops": 8671.414217307705, 00:34:59.072 "mibps": 33.872711786358224, 00:34:59.072 "io_failed": 0, 00:34:59.072 "io_timeout": 0, 00:34:59.072 "avg_latency_us": 14715.519204082266, 00:34:59.072 "min_latency_us": 277.2676923076923, 00:34:59.072 "max_latency_us": 25407.803076923075 00:34:59.072 } 00:34:59.072 ], 00:34:59.072 "core_count": 1 00:34:59.072 } 00:34:59.072 23:16:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:34:59.072 [2024-12-09 23:16:37.493765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.072 [2024-12-09 23:16:37.493817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:59.072 [2024-12-09 23:16:37.493830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:59.072 [2024-12-09 23:16:37.493840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.072 [2024-12-09 23:16:37.493861] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:59.072 [2024-12-09 23:16:37.496439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.072 [2024-12-09 23:16:37.496478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:59.072 [2024-12-09 23:16:37.496490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.560 ms 00:34:59.072 [2024-12-09 23:16:37.496498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.072 [2024-12-09 23:16:37.498146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.072 [2024-12-09 23:16:37.498178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:59.072 [2024-12-09 23:16:37.498192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:34:59.072 [2024-12-09 23:16:37.498199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.332 [2024-12-09 23:16:37.638662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.332 [2024-12-09 23:16:37.638702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:34:59.332 [2024-12-09 23:16:37.638718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 140.441 ms 00:34:59.332 [2024-12-09 23:16:37.638726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.332 [2024-12-09 23:16:37.644832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.332 [2024-12-09 23:16:37.644858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:59.332 [2024-12-09 23:16:37.644870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.075 ms 00:34:59.332 [2024-12-09 23:16:37.644881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.332 [2024-12-09 23:16:37.667791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.332 [2024-12-09 23:16:37.667920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:59.332 [2024-12-09 23:16:37.667939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.852 ms 00:34:59.332 [2024-12-09 23:16:37.667946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.332 [2024-12-09 23:16:37.682138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.332 [2024-12-09 23:16:37.682173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:59.332 [2024-12-09 23:16:37.682188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.158 ms 00:34:59.332 [2024-12-09 23:16:37.682196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.332 [2024-12-09 23:16:37.682345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.332 [2024-12-09 23:16:37.682356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:59.332 [2024-12-09 23:16:37.682387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:34:59.332 [2024-12-09 23:16:37.682395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.332 [2024-12-09 23:16:37.705239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.332 [2024-12-09 23:16:37.705383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:59.332 [2024-12-09 23:16:37.705403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.826 ms 00:34:59.332 [2024-12-09 23:16:37.705410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.332 [2024-12-09 23:16:37.727533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.332 [2024-12-09 23:16:37.727565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:59.332 [2024-12-09 23:16:37.727578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.090 ms 00:34:59.332 [2024-12-09 23:16:37.727586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.332 [2024-12-09 23:16:37.749179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.332 [2024-12-09 23:16:37.749207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:59.332 [2024-12-09 23:16:37.749227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.557 ms 00:34:59.332 [2024-12-09 23:16:37.749235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.332 [2024-12-09 23:16:37.770534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.332 [2024-12-09 
23:16:37.770566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:59.332 [2024-12-09 23:16:37.770580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.225 ms 00:34:59.332 [2024-12-09 23:16:37.770588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.332 [2024-12-09 23:16:37.770621] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:59.332 [2024-12-09 23:16:37.770634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 
[Bands 2 through 100 elided: ftl_dev_dump_bands printed the identical line for every band, 0 / 261120 wr_cnt: 0 state: free] 
00:34:59.333 [2024-12-09 23:16:37.771521] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:59.333 [2024-12-09 23:16:37.771531] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e530c227-a306-46c3-b9a5-c9da92e0f43f 00:34:59.333 [2024-12-09 23:16:37.771540] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:59.333 [2024-12-09 23:16:37.771549] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:59.333 [2024-12-09 23:16:37.771555] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:59.333 [2024-12-09 23:16:37.771564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:59.333 [2024-12-09 23:16:37.771570] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:59.333 [2024-12-09 23:16:37.771579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:59.333 [2024-12-09 23:16:37.771586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:59.333 [2024-12-09 23:16:37.771595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:59.333 [2024-12-09 23:16:37.771601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
00:34:59.333 [2024-12-09 23:16:37.771610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.333 [2024-12-09 23:16:37.771617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:59.333 [2024-12-09 23:16:37.771627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:34:59.333 [2024-12-09 23:16:37.771635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.334 [2024-12-09 23:16:37.783863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.334 [2024-12-09 23:16:37.783890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:59.334 [2024-12-09 23:16:37.783902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.199 ms 00:34:59.334 [2024-12-09 23:16:37.783910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.334 [2024-12-09 23:16:37.784254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.334 [2024-12-09 23:16:37.784267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:59.334 [2024-12-09 23:16:37.784277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:34:59.334 [2024-12-09 23:16:37.784284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.818404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.818445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:59.592 [2024-12-09 23:16:37.818466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.818474] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.818537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.818545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:59.592 [2024-12-09 23:16:37.818554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.818561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.818651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.818661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:59.592 [2024-12-09 23:16:37.818671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.818678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.818694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.818702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:59.592 [2024-12-09 23:16:37.818710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.818717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.896563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.896615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:59.592 [2024-12-09 23:16:37.896631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.896639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.959085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.959128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:59.592 [2024-12-09 23:16:37.959142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.959150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.959238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.959249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:59.592 [2024-12-09 23:16:37.959258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.959266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.959325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.959335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:59.592 [2024-12-09 23:16:37.959344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.959351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.959440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.959454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:59.592 [2024-12-09 23:16:37.959470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:34:59.592 [2024-12-09 23:16:37.959482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.959524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.959534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:59.592 [2024-12-09 23:16:37.959544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.959551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.959584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.959595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:59.592 [2024-12-09 23:16:37.959604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.959617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.959657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.592 [2024-12-09 23:16:37.959666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:59.592 [2024-12-09 23:16:37.959675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.592 [2024-12-09 23:16:37.959682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.592 [2024-12-09 23:16:37.959798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 465.999 ms, result 0 00:34:59.592 true 00:34:59.592 23:16:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76272 00:34:59.592 23:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76272 ']' 00:34:59.592 23:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76272 00:34:59.592 23:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:34:59.592 23:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:59.592 23:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76272 00:34:59.592 killing process with pid 76272 00:34:59.592 Received shutdown signal, test time was about 4.000000 seconds 00:34:59.592 00:34:59.592 Latency(us) 00:34:59.592 [2024-12-09T23:16:38.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.592 [2024-12-09T23:16:38.054Z] =================================================================================================================== 00:34:59.592 [2024-12-09T23:16:38.054Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:59.592 23:16:38 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:59.592 23:16:38 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:59.592 23:16:38 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76272' 00:34:59.593 23:16:38 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76272 00:34:59.593 23:16:38 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76272 00:35:00.524 23:16:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:00.524 23:16:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:35:00.524 Remove shared memory files 00:35:00.524 23:16:38 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:00.524 23:16:38 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:35:00.524 23:16:38 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:35:00.524 23:16:38 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:35:00.524 23:16:38 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:00.524 23:16:38 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:35:00.524 00:35:00.524 real 0m21.192s 00:35:00.524 user 0m23.984s 00:35:00.524 sys 0m0.834s 00:35:00.524 23:16:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:00.524 23:16:38 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:00.524 ************************************ 00:35:00.524 END TEST ftl_bdevperf 00:35:00.524 ************************************ 00:35:00.524 23:16:38 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:35:00.524 23:16:38 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:00.524 23:16:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:00.524 23:16:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:00.524 ************************************ 00:35:00.524 START TEST ftl_trim 00:35:00.524 ************************************ 00:35:00.524 23:16:38 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:35:00.782 * Looking for test storage... 00:35:00.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.782 23:16:39 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:00.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.782 --rc genhtml_branch_coverage=1 00:35:00.782 --rc genhtml_function_coverage=1 00:35:00.782 --rc genhtml_legend=1 00:35:00.782 --rc geninfo_all_blocks=1 00:35:00.782 --rc geninfo_unexecuted_blocks=1 00:35:00.782 00:35:00.782 ' 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:00.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.782 --rc genhtml_branch_coverage=1 00:35:00.782 --rc genhtml_function_coverage=1 00:35:00.782 --rc genhtml_legend=1 00:35:00.782 --rc geninfo_all_blocks=1 00:35:00.782 --rc geninfo_unexecuted_blocks=1 00:35:00.782 00:35:00.782 ' 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:00.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.782 --rc genhtml_branch_coverage=1 00:35:00.782 --rc genhtml_function_coverage=1 00:35:00.782 --rc genhtml_legend=1 00:35:00.782 --rc geninfo_all_blocks=1 00:35:00.782 --rc geninfo_unexecuted_blocks=1 00:35:00.782 00:35:00.782 ' 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:00.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.782 --rc genhtml_branch_coverage=1 00:35:00.782 --rc genhtml_function_coverage=1 00:35:00.782 --rc genhtml_legend=1 00:35:00.782 --rc geninfo_all_blocks=1 00:35:00.782 --rc geninfo_unexecuted_blocks=1 00:35:00.782 00:35:00.782 ' 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
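The cmp_versions trace above is the stock component-wise version compare from scripts/common.sh: both version strings are split on '.', '-' and ':' (the IFS=.-: lines), each component is reduced to a decimal, and the first differing component decides the comparison. Here 1.15 < 2, so the lcov 1.x coverage flags are exported. A rough Python rendering of that walk, where the function name and the zero-fill for missing components are our assumptions:

    import re

    def version_lt(a: str, b: str) -> bool:
        # Split on the same separator class the shell trace uses (IFS=.-:).
        pa = [int(x) for x in re.split(r"[.\-:]", a)]
        pb = [int(x) for x in re.split(r"[.\-:]", b)]
        for i in range(max(len(pa), len(pb))):
            va = pa[i] if i < len(pa) else 0  # assume missing components compare as 0
            vb = pb[i] if i < len(pb) else 0
            if va != vb:
                return va < vb
        return False

    print(version_lt("1.15", "2"))  # True: the first components already differ (1 < 2)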
00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:00.782 23:16:39 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76609 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76609 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76609 ']' 00:35:00.782 23:16:39 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:00.782 23:16:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:35:00.782 [2024-12-09 23:16:39.183261] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:35:00.782 [2024-12-09 23:16:39.183378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76609 ] 00:35:01.039 [2024-12-09 23:16:39.342042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:01.039 [2024-12-09 23:16:39.448258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.039 [2024-12-09 23:16:39.448347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.039 [2024-12-09 23:16:39.448369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:01.604 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:01.604 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:35:01.604 23:16:40 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:35:01.604 23:16:40 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:35:01.604 23:16:40 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:35:01.604 23:16:40 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:35:01.604 23:16:40 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:35:01.604 23:16:40 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:02.169 23:16:40 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:35:02.169 23:16:40 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:35:02.169 23:16:40 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:02.169 { 00:35:02.169 "name": "nvme0n1", 00:35:02.169 "aliases": [ 
00:35:02.169 "4f58f93f-659c-4de1-9186-f55c1bc2d2a6" 00:35:02.169 ], 00:35:02.169 "product_name": "NVMe disk", 00:35:02.169 "block_size": 4096, 00:35:02.169 "num_blocks": 1310720, 00:35:02.169 "uuid": "4f58f93f-659c-4de1-9186-f55c1bc2d2a6", 00:35:02.169 "numa_id": -1, 00:35:02.169 "assigned_rate_limits": { 00:35:02.169 "rw_ios_per_sec": 0, 00:35:02.169 "rw_mbytes_per_sec": 0, 00:35:02.169 "r_mbytes_per_sec": 0, 00:35:02.169 "w_mbytes_per_sec": 0 00:35:02.169 }, 00:35:02.169 "claimed": true, 00:35:02.169 "claim_type": "read_many_write_one", 00:35:02.169 "zoned": false, 00:35:02.169 "supported_io_types": { 00:35:02.169 "read": true, 00:35:02.169 "write": true, 00:35:02.169 "unmap": true, 00:35:02.169 "flush": true, 00:35:02.169 "reset": true, 00:35:02.169 "nvme_admin": true, 00:35:02.169 "nvme_io": true, 00:35:02.169 "nvme_io_md": false, 00:35:02.169 "write_zeroes": true, 00:35:02.169 "zcopy": false, 00:35:02.169 "get_zone_info": false, 00:35:02.169 "zone_management": false, 00:35:02.169 "zone_append": false, 00:35:02.169 "compare": true, 00:35:02.169 "compare_and_write": false, 00:35:02.169 "abort": true, 00:35:02.169 "seek_hole": false, 00:35:02.169 "seek_data": false, 00:35:02.169 "copy": true, 00:35:02.169 "nvme_iov_md": false 00:35:02.169 }, 00:35:02.169 "driver_specific": { 00:35:02.169 "nvme": [ 00:35:02.169 { 00:35:02.169 "pci_address": "0000:00:11.0", 00:35:02.169 "trid": { 00:35:02.169 "trtype": "PCIe", 00:35:02.169 "traddr": "0000:00:11.0" 00:35:02.169 }, 00:35:02.169 "ctrlr_data": { 00:35:02.169 "cntlid": 0, 00:35:02.169 "vendor_id": "0x1b36", 00:35:02.169 "model_number": "QEMU NVMe Ctrl", 00:35:02.169 "serial_number": "12341", 00:35:02.169 "firmware_revision": "8.0.0", 00:35:02.169 "subnqn": "nqn.2019-08.org.qemu:12341", 00:35:02.169 "oacs": { 00:35:02.169 "security": 0, 00:35:02.169 "format": 1, 00:35:02.169 "firmware": 0, 00:35:02.169 "ns_manage": 1 00:35:02.169 }, 00:35:02.169 "multi_ctrlr": false, 00:35:02.169 "ana_reporting": false 00:35:02.169 }, 00:35:02.169 "vs": { 00:35:02.169 "nvme_version": "1.4" 00:35:02.169 }, 00:35:02.169 "ns_data": { 00:35:02.169 "id": 1, 00:35:02.169 "can_share": false 00:35:02.169 } 00:35:02.169 } 00:35:02.169 ], 00:35:02.169 "mp_policy": "active_passive" 00:35:02.169 } 00:35:02.169 } 00:35:02.169 ]' 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:35:02.169 23:16:40 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:35:02.169 23:16:40 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:35:02.169 23:16:40 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:35:02.169 23:16:40 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:35:02.169 23:16:40 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:02.169 23:16:40 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:02.426 23:16:40 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=b60fa0e2-1b68-4a91-8c43-7a20123d87d2 00:35:02.426 23:16:40 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:35:02.426 23:16:40 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u b60fa0e2-1b68-4a91-8c43-7a20123d87d2 00:35:02.684 23:16:41 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:35:02.941 23:16:41 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a6a0c9f7-00d6-434f-9541-c735c2a3f4ce 00:35:02.941 23:16:41 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a6a0c9f7-00d6-434f-9541-c735c2a3f4ce 00:35:03.198 23:16:41 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.198 23:16:41 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.198 23:16:41 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:35:03.198 23:16:41 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:35:03.198 23:16:41 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.198 23:16:41 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:35:03.198 23:16:41 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.198 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.198 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:03.198 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:35:03.198 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:35:03.198 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.198 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:03.198 { 00:35:03.198 "name": "df5faf09-1dbd-4dfd-ad51-fab2e0087fbb", 00:35:03.198 "aliases": [ 00:35:03.198 "lvs/nvme0n1p0" 00:35:03.198 ], 00:35:03.198 "product_name": "Logical Volume", 00:35:03.198 "block_size": 4096, 00:35:03.198 "num_blocks": 26476544, 00:35:03.198 "uuid": "df5faf09-1dbd-4dfd-ad51-fab2e0087fbb", 00:35:03.198 "assigned_rate_limits": { 00:35:03.198 "rw_ios_per_sec": 0, 00:35:03.198 "rw_mbytes_per_sec": 0, 00:35:03.198 "r_mbytes_per_sec": 0, 00:35:03.198 "w_mbytes_per_sec": 0 00:35:03.198 }, 00:35:03.198 "claimed": false, 00:35:03.198 "zoned": false, 00:35:03.198 "supported_io_types": { 00:35:03.198 "read": true, 00:35:03.198 "write": true, 00:35:03.198 "unmap": true, 00:35:03.198 "flush": false, 00:35:03.198 "reset": true, 00:35:03.198 "nvme_admin": false, 00:35:03.198 "nvme_io": false, 00:35:03.198 "nvme_io_md": false, 00:35:03.198 "write_zeroes": true, 00:35:03.198 "zcopy": false, 00:35:03.198 "get_zone_info": false, 00:35:03.198 "zone_management": false, 00:35:03.198 "zone_append": false, 00:35:03.198 "compare": false, 00:35:03.198 "compare_and_write": false, 00:35:03.198 "abort": false, 00:35:03.198 "seek_hole": true, 00:35:03.198 "seek_data": true, 00:35:03.198 "copy": false, 00:35:03.198 "nvme_iov_md": false 00:35:03.198 }, 00:35:03.198 "driver_specific": { 00:35:03.198 "lvol": { 00:35:03.198 "lvol_store_uuid": "a6a0c9f7-00d6-434f-9541-c735c2a3f4ce", 00:35:03.198 "base_bdev": "nvme0n1", 00:35:03.198 "thin_provision": true, 00:35:03.198 "num_allocated_clusters": 0, 00:35:03.198 "snapshot": false, 00:35:03.198 "clone": false, 00:35:03.198 "esnap_clone": false 00:35:03.198 } 00:35:03.198 } 00:35:03.198 } 00:35:03.198 ]' 00:35:03.198 23:16:41 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:03.198 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:35:03.198 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:03.455 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:35:03.455 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:35:03.455 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:35:03.455 23:16:41 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:35:03.455 23:16:41 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:35:03.455 23:16:41 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:35:03.714 23:16:41 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:35:03.714 23:16:41 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:35:03.714 23:16:41 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.714 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.714 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:03.714 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:35:03.714 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:35:03.714 23:16:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.714 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:03.714 { 00:35:03.714 "name": "df5faf09-1dbd-4dfd-ad51-fab2e0087fbb", 00:35:03.714 "aliases": [ 00:35:03.714 "lvs/nvme0n1p0" 00:35:03.714 ], 00:35:03.714 "product_name": "Logical Volume", 00:35:03.714 "block_size": 4096, 00:35:03.714 "num_blocks": 26476544, 00:35:03.714 "uuid": "df5faf09-1dbd-4dfd-ad51-fab2e0087fbb", 00:35:03.714 "assigned_rate_limits": { 00:35:03.714 "rw_ios_per_sec": 0, 00:35:03.714 "rw_mbytes_per_sec": 0, 00:35:03.714 "r_mbytes_per_sec": 0, 00:35:03.714 "w_mbytes_per_sec": 0 00:35:03.714 }, 00:35:03.714 "claimed": false, 00:35:03.714 "zoned": false, 00:35:03.714 "supported_io_types": { 00:35:03.714 "read": true, 00:35:03.714 "write": true, 00:35:03.714 "unmap": true, 00:35:03.714 "flush": false, 00:35:03.714 "reset": true, 00:35:03.714 "nvme_admin": false, 00:35:03.714 "nvme_io": false, 00:35:03.714 "nvme_io_md": false, 00:35:03.714 "write_zeroes": true, 00:35:03.714 "zcopy": false, 00:35:03.714 "get_zone_info": false, 00:35:03.714 "zone_management": false, 00:35:03.714 "zone_append": false, 00:35:03.714 "compare": false, 00:35:03.714 "compare_and_write": false, 00:35:03.714 "abort": false, 00:35:03.714 "seek_hole": true, 00:35:03.714 "seek_data": true, 00:35:03.714 "copy": false, 00:35:03.714 "nvme_iov_md": false 00:35:03.714 }, 00:35:03.714 "driver_specific": { 00:35:03.714 "lvol": { 00:35:03.714 "lvol_store_uuid": "a6a0c9f7-00d6-434f-9541-c735c2a3f4ce", 00:35:03.714 "base_bdev": "nvme0n1", 00:35:03.714 "thin_provision": true, 00:35:03.714 "num_allocated_clusters": 0, 00:35:03.714 "snapshot": false, 00:35:03.714 "clone": false, 00:35:03.714 "esnap_clone": false 00:35:03.714 } 00:35:03.714 } 00:35:03.714 } 00:35:03.714 ]' 00:35:03.714 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:03.970 23:16:42 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:35:03.970 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:03.970 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:35:03.970 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:35:03.970 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:35:03.970 23:16:42 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:35:03.970 23:16:42 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:35:03.970 23:16:42 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:35:03.970 23:16:42 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:35:03.970 23:16:42 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.970 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:03.970 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:03.970 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:35:03.970 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:35:03.970 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df5faf09-1dbd-4dfd-ad51-fab2e0087fbb 00:35:04.227 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:04.227 { 00:35:04.227 "name": "df5faf09-1dbd-4dfd-ad51-fab2e0087fbb", 00:35:04.227 "aliases": [ 00:35:04.227 "lvs/nvme0n1p0" 00:35:04.227 ], 00:35:04.227 "product_name": "Logical Volume", 00:35:04.227 "block_size": 4096, 00:35:04.227 "num_blocks": 26476544, 00:35:04.227 "uuid": "df5faf09-1dbd-4dfd-ad51-fab2e0087fbb", 00:35:04.227 "assigned_rate_limits": { 00:35:04.227 "rw_ios_per_sec": 0, 00:35:04.227 "rw_mbytes_per_sec": 0, 00:35:04.227 "r_mbytes_per_sec": 0, 00:35:04.227 "w_mbytes_per_sec": 0 00:35:04.227 }, 00:35:04.227 "claimed": false, 00:35:04.227 "zoned": false, 00:35:04.227 "supported_io_types": { 00:35:04.227 "read": true, 00:35:04.227 "write": true, 00:35:04.227 "unmap": true, 00:35:04.227 "flush": false, 00:35:04.228 "reset": true, 00:35:04.228 "nvme_admin": false, 00:35:04.228 "nvme_io": false, 00:35:04.228 "nvme_io_md": false, 00:35:04.228 "write_zeroes": true, 00:35:04.228 "zcopy": false, 00:35:04.228 "get_zone_info": false, 00:35:04.228 "zone_management": false, 00:35:04.228 "zone_append": false, 00:35:04.228 "compare": false, 00:35:04.228 "compare_and_write": false, 00:35:04.228 "abort": false, 00:35:04.228 "seek_hole": true, 00:35:04.228 "seek_data": true, 00:35:04.228 "copy": false, 00:35:04.228 "nvme_iov_md": false 00:35:04.228 }, 00:35:04.228 "driver_specific": { 00:35:04.228 "lvol": { 00:35:04.228 "lvol_store_uuid": "a6a0c9f7-00d6-434f-9541-c735c2a3f4ce", 00:35:04.228 "base_bdev": "nvme0n1", 00:35:04.228 "thin_provision": true, 00:35:04.228 "num_allocated_clusters": 0, 00:35:04.228 "snapshot": false, 00:35:04.228 "clone": false, 00:35:04.228 "esnap_clone": false 00:35:04.228 } 00:35:04.228 } 00:35:04.228 } 00:35:04.228 ]' 00:35:04.228 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:04.228 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:35:04.228 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:04.228 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:35:04.228 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:35:04.228 23:16:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:35:04.228 23:16:42 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:35:04.228 23:16:42 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d df5faf09-1dbd-4dfd-ad51-fab2e0087fbb -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:35:04.490 [2024-12-09 23:16:42.858192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 [2024-12-09 23:16:42.858243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:04.490 [2024-12-09 23:16:42.858258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:04.490 [2024-12-09 23:16:42.858266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.860535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 [2024-12-09 23:16:42.860565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:04.490 [2024-12-09 23:16:42.860575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:35:04.490 [2024-12-09 23:16:42.860581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.861004] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:04.490 [2024-12-09 23:16:42.862607] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:04.490 [2024-12-09 23:16:42.862693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 [2024-12-09 23:16:42.862719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:04.490 [2024-12-09 23:16:42.862748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:35:04.490 [2024-12-09 23:16:42.862768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.863033] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8d08cbd7-a528-4f3d-b495-445a47785ac7 00:35:04.490 [2024-12-09 23:16:42.864800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 [2024-12-09 23:16:42.864873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:35:04.490 [2024-12-09 23:16:42.864900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:35:04.490 [2024-12-09 23:16:42.864924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.872591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 [2024-12-09 23:16:42.872662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:04.490 [2024-12-09 23:16:42.872690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.476 ms 00:35:04.490 [2024-12-09 23:16:42.872714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.873015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 [2024-12-09 23:16:42.873048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:04.490 [2024-12-09 23:16:42.873070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.174 ms 00:35:04.490 [2024-12-09 23:16:42.873101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.873169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 [2024-12-09 23:16:42.873194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:04.490 [2024-12-09 23:16:42.873241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:35:04.490 [2024-12-09 23:16:42.873271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.873388] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:35:04.490 [2024-12-09 23:16:42.877092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 [2024-12-09 23:16:42.877122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:04.490 [2024-12-09 23:16:42.877135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.716 ms 00:35:04.490 [2024-12-09 23:16:42.877142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.877198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 [2024-12-09 23:16:42.877232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:04.490 [2024-12-09 23:16:42.877242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:04.490 [2024-12-09 23:16:42.877249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.877276] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:35:04.490 [2024-12-09 23:16:42.877426] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:04.490 [2024-12-09 23:16:42.877441] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:04.490 [2024-12-09 23:16:42.877452] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:04.490 [2024-12-09 23:16:42.877463] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:04.490 [2024-12-09 23:16:42.877471] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:04.490 [2024-12-09 23:16:42.877480] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:35:04.490 [2024-12-09 23:16:42.877487] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:04.490 [2024-12-09 23:16:42.877497] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:04.490 [2024-12-09 23:16:42.877506] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:04.490 [2024-12-09 23:16:42.877515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 [2024-12-09 23:16:42.877522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:04.490 [2024-12-09 23:16:42.877531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:35:04.490 [2024-12-09 23:16:42.877538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.877638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.490 
[2024-12-09 23:16:42.877646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:04.490 [2024-12-09 23:16:42.877656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:04.490 [2024-12-09 23:16:42.877662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.490 [2024-12-09 23:16:42.877795] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:04.490 [2024-12-09 23:16:42.877804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:04.490 [2024-12-09 23:16:42.877813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:04.490 [2024-12-09 23:16:42.877821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:04.490 [2024-12-09 23:16:42.877830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:04.490 [2024-12-09 23:16:42.877836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:04.490 [2024-12-09 23:16:42.877844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:35:04.490 [2024-12-09 23:16:42.877851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:04.490 [2024-12-09 23:16:42.877859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:35:04.490 [2024-12-09 23:16:42.877865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:04.490 [2024-12-09 23:16:42.877875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:04.490 [2024-12-09 23:16:42.877881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:35:04.490 [2024-12-09 23:16:42.877889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:04.490 [2024-12-09 23:16:42.877896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:04.490 [2024-12-09 23:16:42.877904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:35:04.490 [2024-12-09 23:16:42.877910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:04.491 [2024-12-09 23:16:42.877919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:04.491 [2024-12-09 23:16:42.877926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:35:04.491 [2024-12-09 23:16:42.877934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:04.491 [2024-12-09 23:16:42.877941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:04.491 [2024-12-09 23:16:42.877950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:35:04.491 [2024-12-09 23:16:42.877956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:04.491 [2024-12-09 23:16:42.877964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:04.491 [2024-12-09 23:16:42.877970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:35:04.491 [2024-12-09 23:16:42.877978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:04.491 [2024-12-09 23:16:42.877985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:04.491 [2024-12-09 23:16:42.877993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:35:04.491 [2024-12-09 23:16:42.877999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:04.491 [2024-12-09 23:16:42.878007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:35:04.491 [2024-12-09 23:16:42.878014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:35:04.491 [2024-12-09 23:16:42.878022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:04.491 [2024-12-09 23:16:42.878028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:04.491 [2024-12-09 23:16:42.878038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:35:04.491 [2024-12-09 23:16:42.878044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:04.491 [2024-12-09 23:16:42.878052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:04.491 [2024-12-09 23:16:42.878058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:35:04.491 [2024-12-09 23:16:42.878067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:04.491 [2024-12-09 23:16:42.878074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:04.491 [2024-12-09 23:16:42.878082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:35:04.491 [2024-12-09 23:16:42.878088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:04.491 [2024-12-09 23:16:42.878096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:04.491 [2024-12-09 23:16:42.878102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:35:04.491 [2024-12-09 23:16:42.878110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:04.491 [2024-12-09 23:16:42.878116] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:04.491 [2024-12-09 23:16:42.878125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:04.491 [2024-12-09 23:16:42.878132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:04.491 [2024-12-09 23:16:42.878140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:04.491 [2024-12-09 23:16:42.878148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:04.491 [2024-12-09 23:16:42.878157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:04.491 [2024-12-09 23:16:42.878163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:04.491 [2024-12-09 23:16:42.878172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:04.491 [2024-12-09 23:16:42.878178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:04.491 [2024-12-09 23:16:42.878187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:04.491 [2024-12-09 23:16:42.878196] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:04.491 [2024-12-09 23:16:42.878206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:04.491 [2024-12-09 23:16:42.878228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:35:04.491 [2024-12-09 23:16:42.878238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:35:04.491 [2024-12-09 23:16:42.878245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:35:04.491 [2024-12-09 23:16:42.878254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:35:04.491 [2024-12-09 23:16:42.878261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:35:04.491 [2024-12-09 23:16:42.878269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:35:04.491 [2024-12-09 23:16:42.878276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:35:04.491 [2024-12-09 23:16:42.878286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:35:04.491 [2024-12-09 23:16:42.878293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:35:04.491 [2024-12-09 23:16:42.878303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:35:04.491 [2024-12-09 23:16:42.878309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:35:04.491 [2024-12-09 23:16:42.878318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:35:04.491 [2024-12-09 23:16:42.878325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:35:04.491 [2024-12-09 23:16:42.878333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:35:04.491 [2024-12-09 23:16:42.878340] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:04.491 [2024-12-09 23:16:42.878352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:04.491 [2024-12-09 23:16:42.878360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:04.491 [2024-12-09 23:16:42.878369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:04.491 [2024-12-09 23:16:42.878376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:04.491 [2024-12-09 23:16:42.878385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:04.491 [2024-12-09 23:16:42.878393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.491 [2024-12-09 23:16:42.878402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:04.491 [2024-12-09 23:16:42.878409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:35:04.491 [2024-12-09 23:16:42.878417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.491 [2024-12-09 23:16:42.878489] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:35:04.491 [2024-12-09 23:16:42.878503] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:35:07.016 [2024-12-09 23:16:45.082362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.082419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:35:07.016 [2024-12-09 23:16:45.082435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2203.858 ms 00:35:07.016 [2024-12-09 23:16:45.082445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.107486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.107530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:07.016 [2024-12-09 23:16:45.107543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.802 ms 00:35:07.016 [2024-12-09 23:16:45.107552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.107700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.107712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:07.016 [2024-12-09 23:16:45.107739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:35:07.016 [2024-12-09 23:16:45.107750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.148535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.148583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:07.016 [2024-12-09 23:16:45.148596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.746 ms 00:35:07.016 [2024-12-09 23:16:45.148605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.148690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.148703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:07.016 [2024-12-09 23:16:45.148712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:07.016 [2024-12-09 23:16:45.148721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.149028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.149048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:07.016 [2024-12-09 23:16:45.149057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:35:07.016 [2024-12-09 23:16:45.149065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.149187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.149198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:07.016 [2024-12-09 23:16:45.149234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:35:07.016 [2024-12-09 23:16:45.149246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.163294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.163329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:35:07.016 [2024-12-09 23:16:45.163339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.024 ms 00:35:07.016 [2024-12-09 23:16:45.163349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.174505] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:35:07.016 [2024-12-09 23:16:45.188056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.188093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:07.016 [2024-12-09 23:16:45.188106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.609 ms 00:35:07.016 [2024-12-09 23:16:45.188113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.249414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.249471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:35:07.016 [2024-12-09 23:16:45.249488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.234 ms 00:35:07.016 [2024-12-09 23:16:45.249496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.249713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.249725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:07.016 [2024-12-09 23:16:45.249737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:35:07.016 [2024-12-09 23:16:45.249745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.273277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.273342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:35:07.016 [2024-12-09 23:16:45.273357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.498 ms 00:35:07.016 [2024-12-09 23:16:45.273366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.296208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.296251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:35:07.016 [2024-12-09 23:16:45.296265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.771 ms 00:35:07.016 [2024-12-09 23:16:45.296273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.296864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.296888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:07.016 [2024-12-09 23:16:45.296899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:35:07.016 [2024-12-09 23:16:45.296906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-09 23:16:45.362761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-09 23:16:45.362808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:35:07.016 [2024-12-09 23:16:45.362825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.820 ms 00:35:07.017 [2024-12-09 23:16:45.362833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
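The management steps traced through here are the FTL startup sequence kicked off by the bdev_ftl_create RPC at the top of this section; the 'FTL shutdown' trace later in the log mirrors it step for step. As a minimal sketch of the create / wait / unload flow the harness is driving (commands and flags exactly as recorded in this log; the wait step is what the waitforbdev helper does, polling bdev_get_bdevs with a per-query timeout):

    # Create the FTL bdev on top of the base bdev and the NV-cache partition
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d df5faf09-1dbd-4dfd-ad51-fab2e0087fbb -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
    # Wait for bdev examination to finish, then poll until ftl0 is visible
    # (2000 ms timeout per query) before issuing any I/O against it
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
    # Tear the device down again once the test is done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0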
00:35:07.017 [2024-12-09 23:16:45.387336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-09 23:16:45.387376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:35:07.017 [2024-12-09 23:16:45.387389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.421 ms 00:35:07.017 [2024-12-09 23:16:45.387398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-09 23:16:45.410680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-09 23:16:45.410719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:35:07.017 [2024-12-09 23:16:45.410732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.236 ms 00:35:07.017 [2024-12-09 23:16:45.410740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-09 23:16:45.434102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-09 23:16:45.434152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:07.017 [2024-12-09 23:16:45.434165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.304 ms 00:35:07.017 [2024-12-09 23:16:45.434173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-09 23:16:45.434231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-09 23:16:45.434243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:07.017 [2024-12-09 23:16:45.434255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:35:07.017 [2024-12-09 23:16:45.434262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-09 23:16:45.434341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-09 23:16:45.434351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:07.017 [2024-12-09 23:16:45.434360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:35:07.017 [2024-12-09 23:16:45.434367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-09 23:16:45.435196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:07.017 [2024-12-09 23:16:45.438333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2576.642 ms, result 0 00:35:07.017 [2024-12-09 23:16:45.438892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:07.017 { 00:35:07.017 "name": "ftl0", 00:35:07.017 "uuid": "8d08cbd7-a528-4f3d-b495-445a47785ac7" 00:35:07.017 } 00:35:07.017 23:16:45 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:35:07.017 23:16:45 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:35:07.017 23:16:45 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:07.017 23:16:45 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:35:07.017 23:16:45 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:07.017 23:16:45 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:07.017 23:16:45 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:07.274 23:16:45 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:35:07.531 [ 00:35:07.531 { 00:35:07.531 "name": "ftl0", 00:35:07.531 "aliases": [ 00:35:07.531 "8d08cbd7-a528-4f3d-b495-445a47785ac7" 00:35:07.531 ], 00:35:07.531 "product_name": "FTL disk", 00:35:07.531 "block_size": 4096, 00:35:07.531 "num_blocks": 23592960, 00:35:07.531 "uuid": "8d08cbd7-a528-4f3d-b495-445a47785ac7", 00:35:07.531 "assigned_rate_limits": { 00:35:07.531 "rw_ios_per_sec": 0, 00:35:07.531 "rw_mbytes_per_sec": 0, 00:35:07.532 "r_mbytes_per_sec": 0, 00:35:07.532 "w_mbytes_per_sec": 0 00:35:07.532 }, 00:35:07.532 "claimed": false, 00:35:07.532 "zoned": false, 00:35:07.532 "supported_io_types": { 00:35:07.532 "read": true, 00:35:07.532 "write": true, 00:35:07.532 "unmap": true, 00:35:07.532 "flush": true, 00:35:07.532 "reset": false, 00:35:07.532 "nvme_admin": false, 00:35:07.532 "nvme_io": false, 00:35:07.532 "nvme_io_md": false, 00:35:07.532 "write_zeroes": true, 00:35:07.532 "zcopy": false, 00:35:07.532 "get_zone_info": false, 00:35:07.532 "zone_management": false, 00:35:07.532 "zone_append": false, 00:35:07.532 "compare": false, 00:35:07.532 "compare_and_write": false, 00:35:07.532 "abort": false, 00:35:07.532 "seek_hole": false, 00:35:07.532 "seek_data": false, 00:35:07.532 "copy": false, 00:35:07.532 "nvme_iov_md": false 00:35:07.532 }, 00:35:07.532 "driver_specific": { 00:35:07.532 "ftl": { 00:35:07.532 "base_bdev": "df5faf09-1dbd-4dfd-ad51-fab2e0087fbb", 00:35:07.532 "cache": "nvc0n1p0" 00:35:07.532 } 00:35:07.532 } 00:35:07.532 } 00:35:07.532 ] 00:35:07.532 23:16:45 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:35:07.532 23:16:45 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:35:07.532 23:16:45 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:35:07.789 23:16:46 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:35:07.789 23:16:46 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:35:08.046 23:16:46 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:35:08.046 { 00:35:08.046 "name": "ftl0", 00:35:08.046 "aliases": [ 00:35:08.046 "8d08cbd7-a528-4f3d-b495-445a47785ac7" 00:35:08.046 ], 00:35:08.046 "product_name": "FTL disk", 00:35:08.046 "block_size": 4096, 00:35:08.046 "num_blocks": 23592960, 00:35:08.046 "uuid": "8d08cbd7-a528-4f3d-b495-445a47785ac7", 00:35:08.046 "assigned_rate_limits": { 00:35:08.046 "rw_ios_per_sec": 0, 00:35:08.046 "rw_mbytes_per_sec": 0, 00:35:08.046 "r_mbytes_per_sec": 0, 00:35:08.046 "w_mbytes_per_sec": 0 00:35:08.046 }, 00:35:08.046 "claimed": false, 00:35:08.046 "zoned": false, 00:35:08.046 "supported_io_types": { 00:35:08.046 "read": true, 00:35:08.046 "write": true, 00:35:08.046 "unmap": true, 00:35:08.046 "flush": true, 00:35:08.046 "reset": false, 00:35:08.046 "nvme_admin": false, 00:35:08.046 "nvme_io": false, 00:35:08.046 "nvme_io_md": false, 00:35:08.046 "write_zeroes": true, 00:35:08.046 "zcopy": false, 00:35:08.046 "get_zone_info": false, 00:35:08.046 "zone_management": false, 00:35:08.046 "zone_append": false, 00:35:08.046 "compare": false, 00:35:08.046 "compare_and_write": false, 00:35:08.046 "abort": false, 00:35:08.046 "seek_hole": false, 00:35:08.046 "seek_data": false, 00:35:08.047 "copy": false, 00:35:08.047 "nvme_iov_md": false 00:35:08.047 }, 00:35:08.047 "driver_specific": { 00:35:08.047 "ftl": { 00:35:08.047 "base_bdev": "df5faf09-1dbd-4dfd-ad51-fab2e0087fbb", 
00:35:08.047 "cache": "nvc0n1p0" 00:35:08.047 } 00:35:08.047 } 00:35:08.047 } 00:35:08.047 ]' 00:35:08.047 23:16:46 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:35:08.047 23:16:46 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:35:08.047 23:16:46 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:35:08.047 [2024-12-09 23:16:46.478432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.047 [2024-12-09 23:16:46.478502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:08.047 [2024-12-09 23:16:46.478520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:08.047 [2024-12-09 23:16:46.478534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.047 [2024-12-09 23:16:46.478573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:35:08.047 [2024-12-09 23:16:46.481358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.047 [2024-12-09 23:16:46.481392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:08.047 [2024-12-09 23:16:46.481411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:35:08.047 [2024-12-09 23:16:46.481420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.047 [2024-12-09 23:16:46.481956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.047 [2024-12-09 23:16:46.481978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:08.047 [2024-12-09 23:16:46.481990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:35:08.047 [2024-12-09 23:16:46.481998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.047 [2024-12-09 23:16:46.485657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.047 [2024-12-09 23:16:46.485681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:08.047 [2024-12-09 23:16:46.485692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms 00:35:08.047 [2024-12-09 23:16:46.485701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.047 [2024-12-09 23:16:46.492764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.047 [2024-12-09 23:16:46.492793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:08.047 [2024-12-09 23:16:46.492805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.009 ms 00:35:08.047 [2024-12-09 23:16:46.492813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.305 [2024-12-09 23:16:46.517370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.305 [2024-12-09 23:16:46.517405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:08.305 [2024-12-09 23:16:46.517421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.464 ms 00:35:08.305 [2024-12-09 23:16:46.517428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.305 [2024-12-09 23:16:46.532859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.305 [2024-12-09 23:16:46.532892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:08.305 [2024-12-09 23:16:46.532904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.362 ms 00:35:08.305 [2024-12-09 23:16:46.532915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.305 [2024-12-09 23:16:46.533136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.305 [2024-12-09 23:16:46.533149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:08.305 [2024-12-09 23:16:46.533160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:35:08.305 [2024-12-09 23:16:46.533167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.305 [2024-12-09 23:16:46.556795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.305 [2024-12-09 23:16:46.556826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:08.305 [2024-12-09 23:16:46.556840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.593 ms 00:35:08.305 [2024-12-09 23:16:46.556848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.305 [2024-12-09 23:16:46.580301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.305 [2024-12-09 23:16:46.580332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:08.305 [2024-12-09 23:16:46.580346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.387 ms 00:35:08.305 [2024-12-09 23:16:46.580353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.305 [2024-12-09 23:16:46.602997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.305 [2024-12-09 23:16:46.603029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:08.305 [2024-12-09 23:16:46.603041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.575 ms 00:35:08.305 [2024-12-09 23:16:46.603049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.305 [2024-12-09 23:16:46.625792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.305 [2024-12-09 23:16:46.625822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:08.305 [2024-12-09 23:16:46.625834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.633 ms 00:35:08.305 [2024-12-09 23:16:46.625841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.305 [2024-12-09 23:16:46.625905] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:08.305 [2024-12-09 23:16:46.625922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:08.305 [2024-12-09 23:16:46.625935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:08.305 [2024-12-09 23:16:46.625943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:08.305 [2024-12-09 23:16:46.625953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:08.305 [2024-12-09 23:16:46.625961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.625973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.625980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.625989] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.625997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 
[2024-12-09 23:16:46.626232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:35:08.306 [2024-12-09 23:16:46.626451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:08.306 [2024-12-09 23:16:46.626771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:08.307 [2024-12-09 23:16:46.626787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:08.307 [2024-12-09 23:16:46.626795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:08.307 [2024-12-09 23:16:46.626805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:08.307 [2024-12-09 23:16:46.626821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:08.307 [2024-12-09 23:16:46.626833] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8d08cbd7-a528-4f3d-b495-445a47785ac7 00:35:08.307 [2024-12-09 23:16:46.626840] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:08.307 [2024-12-09 23:16:46.626849] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:08.307 [2024-12-09 23:16:46.626856] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:08.307 [2024-12-09 23:16:46.626868] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:08.307 [2024-12-09 23:16:46.626875] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:08.307 [2024-12-09 23:16:46.626884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:35:08.307 [2024-12-09 23:16:46.626891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:08.307 [2024-12-09 23:16:46.626899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:08.307 [2024-12-09 23:16:46.626905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:08.307 [2024-12-09 23:16:46.626914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.307 [2024-12-09 23:16:46.626921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:08.307 [2024-12-09 23:16:46.626931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:35:08.307 [2024-12-09 23:16:46.626938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.307 [2024-12-09 23:16:46.639910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.307 [2024-12-09 23:16:46.639941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:08.307 [2024-12-09 23:16:46.639956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.930 ms 00:35:08.307 [2024-12-09 23:16:46.639964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.307 [2024-12-09 23:16:46.640406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.307 [2024-12-09 23:16:46.640419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:08.307 [2024-12-09 23:16:46.640430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:35:08.307 [2024-12-09 23:16:46.640437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.307 [2024-12-09 23:16:46.687164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.307 [2024-12-09 23:16:46.687213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:08.307 [2024-12-09 23:16:46.687235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.307 [2024-12-09 23:16:46.687244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.307 [2024-12-09 23:16:46.687355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.307 [2024-12-09 23:16:46.687365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:08.307 [2024-12-09 23:16:46.687376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.307 [2024-12-09 23:16:46.687383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.307 [2024-12-09 23:16:46.687451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.307 [2024-12-09 23:16:46.687462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:08.307 [2024-12-09 23:16:46.687476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.307 [2024-12-09 23:16:46.687484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.307 [2024-12-09 23:16:46.687510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.307 [2024-12-09 23:16:46.687518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:08.307 [2024-12-09 23:16:46.687527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.307 [2024-12-09 23:16:46.687535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.568 [2024-12-09 23:16:46.772894] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.568 [2024-12-09 23:16:46.772953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:08.568 [2024-12-09 23:16:46.772967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.568 [2024-12-09 23:16:46.772976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.568 [2024-12-09 23:16:46.839271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.568 [2024-12-09 23:16:46.839322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:08.568 [2024-12-09 23:16:46.839336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.568 [2024-12-09 23:16:46.839345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.568 [2024-12-09 23:16:46.839462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.568 [2024-12-09 23:16:46.839472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:08.568 [2024-12-09 23:16:46.839484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.568 [2024-12-09 23:16:46.839495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.568 [2024-12-09 23:16:46.839549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.568 [2024-12-09 23:16:46.839558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:08.568 [2024-12-09 23:16:46.839568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.568 [2024-12-09 23:16:46.839575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.568 [2024-12-09 23:16:46.839694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.568 [2024-12-09 23:16:46.839705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:08.568 [2024-12-09 23:16:46.839714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.568 [2024-12-09 23:16:46.839725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.568 [2024-12-09 23:16:46.839784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.568 [2024-12-09 23:16:46.839793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:08.568 [2024-12-09 23:16:46.839802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.568 [2024-12-09 23:16:46.839811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.568 [2024-12-09 23:16:46.839864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.568 [2024-12-09 23:16:46.839874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:08.568 [2024-12-09 23:16:46.839885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.568 [2024-12-09 23:16:46.839893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.568 [2024-12-09 23:16:46.839954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.568 [2024-12-09 23:16:46.839964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:08.568 [2024-12-09 23:16:46.839974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.568 [2024-12-09 23:16:46.839981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:35:08.568 [2024-12-09 23:16:46.840184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 361.733 ms, result 0 00:35:08.568 true 00:35:08.568 23:16:46 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76609 00:35:08.568 23:16:46 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76609 ']' 00:35:08.568 23:16:46 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76609 00:35:08.568 23:16:46 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:35:08.568 23:16:46 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.568 23:16:46 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76609 00:35:08.568 killing process with pid 76609 00:35:08.568 23:16:46 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:08.568 23:16:46 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:08.568 23:16:46 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76609' 00:35:08.568 23:16:46 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76609 00:35:08.568 23:16:46 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76609 00:35:16.693 23:16:55 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:35:18.074 65536+0 records in 00:35:18.074 65536+0 records out 00:35:18.074 268435456 bytes (268 MB, 256 MiB) copied, 1.06616 s, 252 MB/s 00:35:18.074 23:16:56 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:18.074 [2024-12-09 23:16:56.215936] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:35:18.074 [2024-12-09 23:16:56.216049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76788 ]
00:35:18.074 [2024-12-09 23:16:56.379540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:18.074 [2024-12-09 23:16:56.478502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:18.335 [2024-12-09 23:16:56.737538] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:35:18.335 [2024-12-09 23:16:56.737607] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:35:18.596 [2024-12-09 23:16:56.891352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.596 [2024-12-09 23:16:56.891412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:35:18.596 [2024-12-09 23:16:56.891425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:35:18.596 [2024-12-09 23:16:56.891434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.596 [2024-12-09 23:16:56.894178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.596 [2024-12-09 23:16:56.894231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:35:18.596 [2024-12-09 23:16:56.894242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.726 ms
00:35:18.596 [2024-12-09 23:16:56.894250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.596 [2024-12-09 23:16:56.894632] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:35:18.596 [2024-12-09 23:16:56.895474] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:35:18.596 [2024-12-09 23:16:56.895508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.596 [2024-12-09 23:16:56.895518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:35:18.596 [2024-12-09 23:16:56.895528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms
00:35:18.596 [2024-12-09 23:16:56.895536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.596 [2024-12-09 23:16:56.896737] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:35:18.596 [2024-12-09 23:16:56.909013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.596 [2024-12-09 23:16:56.909050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:35:18.596 [2024-12-09 23:16:56.909061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.277 ms
00:35:18.596 [2024-12-09 23:16:56.909070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.596 [2024-12-09 23:16:56.909161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.596 [2024-12-09 23:16:56.909173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:35:18.596 [2024-12-09 23:16:56.909182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:35:18.596 [2024-12-09 23:16:56.909189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.596 [2024-12-09 23:16:56.914073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.596 [2024-12-09 23:16:56.914107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:18.596 [2024-12-09 23:16:56.914117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.831 ms 00:35:18.596 [2024-12-09 23:16:56.914124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.596 [2024-12-09 23:16:56.914210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.596 [2024-12-09 23:16:56.914236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:18.596 [2024-12-09 23:16:56.914248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:35:18.596 [2024-12-09 23:16:56.914260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.596 [2024-12-09 23:16:56.914297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.596 [2024-12-09 23:16:56.914307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:18.596 [2024-12-09 23:16:56.914315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:18.596 [2024-12-09 23:16:56.914322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.596 [2024-12-09 23:16:56.914342] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:35:18.596 [2024-12-09 23:16:56.917597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.596 [2024-12-09 23:16:56.917626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:18.596 [2024-12-09 23:16:56.917635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:35:18.596 [2024-12-09 23:16:56.917642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.596 [2024-12-09 23:16:56.917679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.597 [2024-12-09 23:16:56.917688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:18.597 [2024-12-09 23:16:56.917696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:18.597 [2024-12-09 23:16:56.917703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.597 [2024-12-09 23:16:56.917723] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:18.597 [2024-12-09 23:16:56.917743] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:18.597 [2024-12-09 23:16:56.917790] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:18.597 [2024-12-09 23:16:56.917808] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:18.597 [2024-12-09 23:16:56.917924] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:18.597 [2024-12-09 23:16:56.917946] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:18.597 [2024-12-09 23:16:56.917958] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:18.597 [2024-12-09 23:16:56.917971] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:18.597 [2024-12-09 23:16:56.917980] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:18.597 [2024-12-09 23:16:56.917990] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:35:18.597 [2024-12-09 23:16:56.918001] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:18.597 [2024-12-09 23:16:56.918013] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:18.597 [2024-12-09 23:16:56.918024] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:18.597 [2024-12-09 23:16:56.918036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.597 [2024-12-09 23:16:56.918043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:18.597 [2024-12-09 23:16:56.918051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:35:18.597 [2024-12-09 23:16:56.918059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.597 [2024-12-09 23:16:56.918160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.597 [2024-12-09 23:16:56.918185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:18.597 [2024-12-09 23:16:56.918193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:35:18.597 [2024-12-09 23:16:56.918201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.597 [2024-12-09 23:16:56.918347] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:18.597 [2024-12-09 23:16:56.918366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:18.597 [2024-12-09 23:16:56.918375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:18.597 [2024-12-09 23:16:56.918386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:18.597 [2024-12-09 23:16:56.918405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:35:18.597 [2024-12-09 23:16:56.918426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:18.597 [2024-12-09 23:16:56.918438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:18.597 [2024-12-09 23:16:56.918455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:18.597 [2024-12-09 23:16:56.918467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:35:18.597 [2024-12-09 23:16:56.918474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:18.597 [2024-12-09 23:16:56.918481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:18.597 [2024-12-09 23:16:56.918491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:35:18.597 [2024-12-09 23:16:56.918502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:18.597 [2024-12-09 23:16:56.918516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:35:18.597 [2024-12-09 23:16:56.918522] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:18.597 [2024-12-09 23:16:56.918535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:18.597 [2024-12-09 23:16:56.918553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:18.597 [2024-12-09 23:16:56.918565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:18.597 [2024-12-09 23:16:56.918578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:18.597 [2024-12-09 23:16:56.918587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:18.597 [2024-12-09 23:16:56.918605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:18.597 [2024-12-09 23:16:56.918612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:18.597 [2024-12-09 23:16:56.918624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:18.597 [2024-12-09 23:16:56.918631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:18.597 [2024-12-09 23:16:56.918643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:18.597 [2024-12-09 23:16:56.918649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:35:18.597 [2024-12-09 23:16:56.918655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:18.597 [2024-12-09 23:16:56.918663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:18.597 [2024-12-09 23:16:56.918674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:35:18.597 [2024-12-09 23:16:56.918681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:18.597 [2024-12-09 23:16:56.918694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:35:18.597 [2024-12-09 23:16:56.918704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918716] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:18.597 [2024-12-09 23:16:56.918728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:18.597 [2024-12-09 23:16:56.918741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:18.597 [2024-12-09 23:16:56.918747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.597 [2024-12-09 23:16:56.918755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:18.597 [2024-12-09 23:16:56.918762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:18.597 [2024-12-09 23:16:56.918768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:18.597 
[2024-12-09 23:16:56.918774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:18.597 [2024-12-09 23:16:56.918780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:18.597 [2024-12-09 23:16:56.918787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:18.597 [2024-12-09 23:16:56.918797] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:18.597 [2024-12-09 23:16:56.918810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:18.597 [2024-12-09 23:16:56.918818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:35:18.597 [2024-12-09 23:16:56.918826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:35:18.597 [2024-12-09 23:16:56.918837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:35:18.597 [2024-12-09 23:16:56.918848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:35:18.597 [2024-12-09 23:16:56.918856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:35:18.597 [2024-12-09 23:16:56.918863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:35:18.597 [2024-12-09 23:16:56.918874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:35:18.597 [2024-12-09 23:16:56.918884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:35:18.597 [2024-12-09 23:16:56.918891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:35:18.597 [2024-12-09 23:16:56.918898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:35:18.597 [2024-12-09 23:16:56.918905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:35:18.597 [2024-12-09 23:16:56.918914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:35:18.597 [2024-12-09 23:16:56.918926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:35:18.597 [2024-12-09 23:16:56.918934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:35:18.597 [2024-12-09 23:16:56.918942] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:18.597 [2024-12-09 23:16:56.918950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:18.597 [2024-12-09 23:16:56.918958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:18.597 [2024-12-09 23:16:56.918965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:18.597 [2024-12-09 23:16:56.918972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:18.597 [2024-12-09 23:16:56.918979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:18.597 [2024-12-09 23:16:56.918990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:56.919001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:18.598 [2024-12-09 23:16:56.919010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:35:18.598 [2024-12-09 23:16:56.919020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.598 [2024-12-09 23:16:56.944564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:56.944600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:18.598 [2024-12-09 23:16:56.944611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.486 ms 00:35:18.598 [2024-12-09 23:16:56.944619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.598 [2024-12-09 23:16:56.944741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:56.944751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:18.598 [2024-12-09 23:16:56.944760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:35:18.598 [2024-12-09 23:16:56.944767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.598 [2024-12-09 23:16:56.987474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:56.987516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:18.598 [2024-12-09 23:16:56.987531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.686 ms 00:35:18.598 [2024-12-09 23:16:56.987539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.598 [2024-12-09 23:16:56.987635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:56.987647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:18.598 [2024-12-09 23:16:56.987656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:18.598 [2024-12-09 23:16:56.987663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.598 [2024-12-09 23:16:56.988004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:56.988038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:18.598 [2024-12-09 23:16:56.988059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:35:18.598 [2024-12-09 23:16:56.988068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.598 [2024-12-09 23:16:56.988211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:56.988254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:18.598 [2024-12-09 23:16:56.988263] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:35:18.598 [2024-12-09 23:16:56.988271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.598 [2024-12-09 23:16:57.001458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:57.001489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:18.598 [2024-12-09 23:16:57.001499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.165 ms 00:35:18.598 [2024-12-09 23:16:57.001506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.598 [2024-12-09 23:16:57.013982] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:35:18.598 [2024-12-09 23:16:57.014019] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:18.598 [2024-12-09 23:16:57.014031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:57.014038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:18.598 [2024-12-09 23:16:57.014048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.421 ms 00:35:18.598 [2024-12-09 23:16:57.014054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.598 [2024-12-09 23:16:57.038391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:57.038436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:18.598 [2024-12-09 23:16:57.038447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.258 ms 00:35:18.598 [2024-12-09 23:16:57.038455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.598 [2024-12-09 23:16:57.049978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.598 [2024-12-09 23:16:57.050011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:18.598 [2024-12-09 23:16:57.050022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.436 ms 00:35:18.598 [2024-12-09 23:16:57.050029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.858 [2024-12-09 23:16:57.061146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.858 [2024-12-09 23:16:57.061181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:18.858 [2024-12-09 23:16:57.061191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.048 ms 00:35:18.859 [2024-12-09 23:16:57.061199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.859 [2024-12-09 23:16:57.061899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.859 [2024-12-09 23:16:57.061936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:18.859 [2024-12-09 23:16:57.061946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:35:18.859 [2024-12-09 23:16:57.061954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.859 [2024-12-09 23:16:57.117066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.859 [2024-12-09 23:16:57.117123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:18.859 [2024-12-09 23:16:57.117136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.087 ms 00:35:18.859 [2024-12-09 23:16:57.117145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.859 [2024-12-09 23:16:57.127377] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:35:18.859 [2024-12-09 23:16:57.141601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.859 [2024-12-09 23:16:57.141647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:18.859 [2024-12-09 23:16:57.141661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.320 ms 00:35:18.859 [2024-12-09 23:16:57.141670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.859 [2024-12-09 23:16:57.141769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.859 [2024-12-09 23:16:57.141780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:18.859 [2024-12-09 23:16:57.141792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:18.859 [2024-12-09 23:16:57.141803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.859 [2024-12-09 23:16:57.141860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.859 [2024-12-09 23:16:57.141870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:18.859 [2024-12-09 23:16:57.141878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:35:18.859 [2024-12-09 23:16:57.141885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.859 [2024-12-09 23:16:57.141916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.859 [2024-12-09 23:16:57.141931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:18.859 [2024-12-09 23:16:57.141943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:18.859 [2024-12-09 23:16:57.141955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.859 [2024-12-09 23:16:57.141989] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:18.859 [2024-12-09 23:16:57.142000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.859 [2024-12-09 23:16:57.142008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:18.859 [2024-12-09 23:16:57.142015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:18.859 [2024-12-09 23:16:57.142022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.859 [2024-12-09 23:16:57.165092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.859 [2024-12-09 23:16:57.165133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:18.859 [2024-12-09 23:16:57.165146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.050 ms 00:35:18.859 [2024-12-09 23:16:57.165155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.859 [2024-12-09 23:16:57.165256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.859 [2024-12-09 23:16:57.165267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:18.859 [2024-12-09 23:16:57.165275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:35:18.859 [2024-12-09 23:16:57.165282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:35:18.859 [2024-12-09 23:16:57.166128] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:35:18.859 [2024-12-09 23:16:57.169038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.495 ms, result 0
00:35:18.859 [2024-12-09 23:16:57.169666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:35:18.859 [2024-12-09 23:16:57.182568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:35:19.801  [2024-12-09T23:16:59.223Z] Copying: 36/256 [MB] (36 MBps) [2024-12-09T23:17:00.601Z] Copying: 62/256 [MB] (26 MBps) [2024-12-09T23:17:01.539Z] Copying: 84/256 [MB] (21 MBps) [2024-12-09T23:17:02.471Z] Copying: 104/256 [MB] (20 MBps) [2024-12-09T23:17:03.406Z] Copying: 123/256 [MB] (18 MBps) [2024-12-09T23:17:04.355Z] Copying: 155/256 [MB] (32 MBps) [2024-12-09T23:17:05.288Z] Copying: 203/256 [MB] (47 MBps) [2024-12-09T23:17:05.547Z] Copying: 251/256 [MB] (48 MBps) [2024-12-09T23:17:05.547Z] Copying: 256/256 [MB] (average 30 MBps)
[2024-12-09 23:17:05.542298] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:35:27.345 [2024-12-09 23:17:05.551609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:27.345 [2024-12-09 23:17:05.551650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:35:27.345 [2024-12-09 23:17:05.551663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:35:27.345 [2024-12-09 23:17:05.551677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:27.345 [2024-12-09 23:17:05.551700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:35:27.345 [2024-12-09 23:17:05.554356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:27.345 [2024-12-09 23:17:05.554388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:35:27.345 [2024-12-09 23:17:05.554399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.644 ms
00:35:27.345 [2024-12-09 23:17:05.554407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:27.345 [2024-12-09 23:17:05.556911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:27.345 [2024-12-09 23:17:05.556943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:35:27.345 [2024-12-09 23:17:05.556954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms
00:35:27.345 [2024-12-09 23:17:05.556961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:27.345 [2024-12-09 23:17:05.564488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:27.345 [2024-12-09 23:17:05.564527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:35:27.345 [2024-12-09 23:17:05.564537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.511 ms
00:35:27.345 [2024-12-09 23:17:05.564549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:27.345 [2024-12-09 23:17:05.571518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:27.345 [2024-12-09 23:17:05.571549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:35:27.345 [2024-12-09 23:17:05.571560] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.925 ms 00:35:27.345 [2024-12-09 23:17:05.571569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.345 [2024-12-09 23:17:05.596028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.345 [2024-12-09 23:17:05.596065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:27.345 [2024-12-09 23:17:05.596077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.417 ms 00:35:27.345 [2024-12-09 23:17:05.596085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.345 [2024-12-09 23:17:05.609974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.345 [2024-12-09 23:17:05.610018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:27.345 [2024-12-09 23:17:05.610034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.850 ms 00:35:27.345 [2024-12-09 23:17:05.610041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.345 [2024-12-09 23:17:05.610185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.345 [2024-12-09 23:17:05.610197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:27.345 [2024-12-09 23:17:05.610210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:35:27.345 [2024-12-09 23:17:05.610243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.345 [2024-12-09 23:17:05.634337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.345 [2024-12-09 23:17:05.634382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:27.345 [2024-12-09 23:17:05.634396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.072 ms 00:35:27.345 [2024-12-09 23:17:05.634405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.345 [2024-12-09 23:17:05.659890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.345 [2024-12-09 23:17:05.659937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:27.345 [2024-12-09 23:17:05.659951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.441 ms 00:35:27.345 [2024-12-09 23:17:05.659959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.345 [2024-12-09 23:17:05.683736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.345 [2024-12-09 23:17:05.683776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:27.345 [2024-12-09 23:17:05.683788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.716 ms 00:35:27.345 [2024-12-09 23:17:05.683797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.345 [2024-12-09 23:17:05.708677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.345 [2024-12-09 23:17:05.708720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:27.345 [2024-12-09 23:17:05.708734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.813 ms 00:35:27.345 [2024-12-09 23:17:05.708743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.345 [2024-12-09 23:17:05.708784] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:27.345 [2024-12-09 23:17:05.708799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:27.345 [2024-12-09 23:17:05.708987] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.708994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 
23:17:05.709169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:35:27.346 [2024-12-09 23:17:05.709392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:35:27.346 [2024-12-09 23:17:05.709595] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:27.346 [2024-12-09 23:17:05.709603] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8d08cbd7-a528-4f3d-b495-445a47785ac7 00:35:27.346 [2024-12-09 23:17:05.709611] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:27.346 [2024-12-09 23:17:05.709619] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:27.346 [2024-12-09 23:17:05.709626] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:27.346 [2024-12-09 23:17:05.709633] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:27.346 [2024-12-09 23:17:05.709640] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:27.346 [2024-12-09 23:17:05.709647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:27.346 [2024-12-09 23:17:05.709654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:27.346 [2024-12-09 23:17:05.709660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:27.346 [2024-12-09 23:17:05.709667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:27.346 [2024-12-09 23:17:05.709674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.346 [2024-12-09 23:17:05.709684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:27.346 [2024-12-09 23:17:05.709692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:35:27.346 [2024-12-09 23:17:05.709699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.346 [2024-12-09 23:17:05.723032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.346 [2024-12-09 23:17:05.723071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:27.346 [2024-12-09 23:17:05.723083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.301 ms 00:35:27.346 [2024-12-09 23:17:05.723090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.346 [2024-12-09 23:17:05.723484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.346 [2024-12-09 23:17:05.723502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:27.346 [2024-12-09 23:17:05.723511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:35:27.347 [2024-12-09 23:17:05.723518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.347 [2024-12-09 23:17:05.759894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:27.347 [2024-12-09 23:17:05.759941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:27.347 [2024-12-09 23:17:05.759954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:27.347 [2024-12-09 23:17:05.759964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.347 [2024-12-09 23:17:05.760077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:27.347 [2024-12-09 23:17:05.760088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:27.347 [2024-12-09 23:17:05.760098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:27.347 [2024-12-09 23:17:05.760106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:35:27.347 [2024-12-09 23:17:05.760158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:27.347 [2024-12-09 23:17:05.760172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:27.347 [2024-12-09 23:17:05.760181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:27.347 [2024-12-09 23:17:05.760190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.347 [2024-12-09 23:17:05.760210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:27.347 [2024-12-09 23:17:05.760249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:27.347 [2024-12-09 23:17:05.760259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:27.347 [2024-12-09 23:17:05.760267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.605 [2024-12-09 23:17:05.841195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:27.605 [2024-12-09 23:17:05.841272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:27.605 [2024-12-09 23:17:05.841285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:27.605 [2024-12-09 23:17:05.841294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.605 [2024-12-09 23:17:05.905907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:27.605 [2024-12-09 23:17:05.905964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:27.606 [2024-12-09 23:17:05.905976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:27.606 [2024-12-09 23:17:05.905984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.606 [2024-12-09 23:17:05.906059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:27.606 [2024-12-09 23:17:05.906068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:27.606 [2024-12-09 23:17:05.906076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:27.606 [2024-12-09 23:17:05.906084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.606 [2024-12-09 23:17:05.906112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:27.606 [2024-12-09 23:17:05.906120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:27.606 [2024-12-09 23:17:05.906133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:27.606 [2024-12-09 23:17:05.906140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.606 [2024-12-09 23:17:05.906245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:27.606 [2024-12-09 23:17:05.906256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:27.606 [2024-12-09 23:17:05.906264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:27.606 [2024-12-09 23:17:05.906271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.606 [2024-12-09 23:17:05.906301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:27.606 [2024-12-09 23:17:05.906310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:27.606 [2024-12-09 23:17:05.906318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:27.606 [2024-12-09 
23:17:05.906328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:27.606 [2024-12-09 23:17:05.906364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:27.606 [2024-12-09 23:17:05.906373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:35:27.606 [2024-12-09 23:17:05.906380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:27.606 [2024-12-09 23:17:05.906387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:27.606 [2024-12-09 23:17:05.906428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:27.606 [2024-12-09 23:17:05.906437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:35:27.606 [2024-12-09 23:17:05.906447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:27.606 [2024-12-09 23:17:05.906454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:27.606 [2024-12-09 23:17:05.906584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.965 ms, result 0
00:35:29.560
00:35:29.560
00:35:29.560 23:17:07 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76907
00:35:29.560 23:17:07 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:35:29.560 23:17:07 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76907
00:35:29.560 23:17:07 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76907 ']'
00:35:29.560 23:17:07 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:29.560 23:17:07 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:29.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
23:17:07 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:29.560 23:17:07 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:29.560 23:17:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:35:29.560 [2024-12-09 23:17:07.709698] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:35:29.560 [2024-12-09 23:17:07.709831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76907 ] 00:35:29.560 [2024-12-09 23:17:07.871353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.560 [2024-12-09 23:17:07.991589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.498 23:17:08 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:30.498 23:17:08 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:35:30.498 23:17:08 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:35:30.498 [2024-12-09 23:17:08.914914] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:30.498 [2024-12-09 23:17:08.914983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:30.758 [2024-12-09 23:17:09.089631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.758 [2024-12-09 23:17:09.089686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:30.758 [2024-12-09 23:17:09.089701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:30.758 [2024-12-09 23:17:09.089709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.758 [2024-12-09 23:17:09.092327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.758 [2024-12-09 23:17:09.092359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:30.758 [2024-12-09 23:17:09.092370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:35:30.758 [2024-12-09 23:17:09.092377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.092446] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:30.759 [2024-12-09 23:17:09.093146] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:30.759 [2024-12-09 23:17:09.093167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.759 [2024-12-09 23:17:09.093174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:30.759 [2024-12-09 23:17:09.093184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:35:30.759 [2024-12-09 23:17:09.093191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.094316] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:30.759 [2024-12-09 23:17:09.107088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.759 [2024-12-09 23:17:09.107129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:30.759 [2024-12-09 23:17:09.107142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.776 ms 00:35:30.759 [2024-12-09 23:17:09.107151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.107244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.759 [2024-12-09 23:17:09.107257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:30.759 [2024-12-09 23:17:09.107265] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:35:30.759 [2024-12-09 23:17:09.107275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.112173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.759 [2024-12-09 23:17:09.112211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:30.759 [2024-12-09 23:17:09.112233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.850 ms 00:35:30.759 [2024-12-09 23:17:09.112244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.112339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.759 [2024-12-09 23:17:09.112350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:30.759 [2024-12-09 23:17:09.112358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:35:30.759 [2024-12-09 23:17:09.112370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.112396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.759 [2024-12-09 23:17:09.112408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:30.759 [2024-12-09 23:17:09.112416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:30.759 [2024-12-09 23:17:09.112424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.112446] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:35:30.759 [2024-12-09 23:17:09.115658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.759 [2024-12-09 23:17:09.115688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:30.759 [2024-12-09 23:17:09.115698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:35:30.759 [2024-12-09 23:17:09.115706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.115751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.759 [2024-12-09 23:17:09.115759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:30.759 [2024-12-09 23:17:09.115768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:35:30.759 [2024-12-09 23:17:09.115777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.115798] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:30.759 [2024-12-09 23:17:09.115816] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:30.759 [2024-12-09 23:17:09.115858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:30.759 [2024-12-09 23:17:09.115873] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:30.759 [2024-12-09 23:17:09.115976] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:30.759 [2024-12-09 23:17:09.115987] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:30.759 [2024-12-09 23:17:09.116001] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:30.759 [2024-12-09 23:17:09.116010] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:30.759 [2024-12-09 23:17:09.116021] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:30.759 [2024-12-09 23:17:09.116029] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:35:30.759 [2024-12-09 23:17:09.116037] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:30.759 [2024-12-09 23:17:09.116044] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:30.759 [2024-12-09 23:17:09.116054] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:30.759 [2024-12-09 23:17:09.116061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.759 [2024-12-09 23:17:09.116070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:30.759 [2024-12-09 23:17:09.116078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:35:30.759 [2024-12-09 23:17:09.116086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.116174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.759 [2024-12-09 23:17:09.116183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:30.759 [2024-12-09 23:17:09.116191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:30.759 [2024-12-09 23:17:09.116203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.759 [2024-12-09 23:17:09.116323] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:30.759 [2024-12-09 23:17:09.116336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:30.759 [2024-12-09 23:17:09.116344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:30.759 [2024-12-09 23:17:09.116353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:30.759 [2024-12-09 23:17:09.116370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:35:30.759 [2024-12-09 23:17:09.116387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:30.759 [2024-12-09 23:17:09.116394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:30.759 [2024-12-09 23:17:09.116408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:30.759 [2024-12-09 23:17:09.116416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:35:30.759 [2024-12-09 23:17:09.116422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:30.759 [2024-12-09 23:17:09.116431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:30.759 [2024-12-09 23:17:09.116438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:35:30.759 [2024-12-09 23:17:09.116447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:30.759 
[2024-12-09 23:17:09.116454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:30.759 [2024-12-09 23:17:09.116462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:35:30.759 [2024-12-09 23:17:09.116474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:30.759 [2024-12-09 23:17:09.116489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:30.759 [2024-12-09 23:17:09.116503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:30.759 [2024-12-09 23:17:09.116513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:30.759 [2024-12-09 23:17:09.116527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:30.759 [2024-12-09 23:17:09.116534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:30.759 [2024-12-09 23:17:09.116548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:30.759 [2024-12-09 23:17:09.116564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:30.759 [2024-12-09 23:17:09.116579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:30.759 [2024-12-09 23:17:09.116586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:30.759 [2024-12-09 23:17:09.116600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:30.759 [2024-12-09 23:17:09.116608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:35:30.759 [2024-12-09 23:17:09.116614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:30.759 [2024-12-09 23:17:09.116622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:30.759 [2024-12-09 23:17:09.116629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:35:30.759 [2024-12-09 23:17:09.116639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:30.759 [2024-12-09 23:17:09.116653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:35:30.759 [2024-12-09 23:17:09.116659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116667] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:30.759 [2024-12-09 23:17:09.116677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:30.759 [2024-12-09 23:17:09.116685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:30.759 [2024-12-09 23:17:09.116692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:30.759 [2024-12-09 23:17:09.116702] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:35:30.759 [2024-12-09 23:17:09.116709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:30.760 [2024-12-09 23:17:09.116718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:30.760 [2024-12-09 23:17:09.116725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:30.760 [2024-12-09 23:17:09.116733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:30.760 [2024-12-09 23:17:09.116739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:30.760 [2024-12-09 23:17:09.116749] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:30.760 [2024-12-09 23:17:09.116758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:30.760 [2024-12-09 23:17:09.116770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:35:30.760 [2024-12-09 23:17:09.116778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:35:30.760 [2024-12-09 23:17:09.116786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:35:30.760 [2024-12-09 23:17:09.116793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:35:30.760 [2024-12-09 23:17:09.116802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:35:30.760 [2024-12-09 23:17:09.116809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:35:30.760 [2024-12-09 23:17:09.116819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:35:30.760 [2024-12-09 23:17:09.116826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:35:30.760 [2024-12-09 23:17:09.116834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:35:30.760 [2024-12-09 23:17:09.116841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:35:30.760 [2024-12-09 23:17:09.116849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:35:30.760 [2024-12-09 23:17:09.116856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:35:30.760 [2024-12-09 23:17:09.116864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:35:30.760 [2024-12-09 23:17:09.116872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:35:30.760 [2024-12-09 23:17:09.116880] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:30.760 [2024-12-09 
23:17:09.116888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:30.760 [2024-12-09 23:17:09.116899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:30.760 [2024-12-09 23:17:09.116906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:30.760 [2024-12-09 23:17:09.116917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:30.760 [2024-12-09 23:17:09.116924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:30.760 [2024-12-09 23:17:09.116933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.760 [2024-12-09 23:17:09.116940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:30.760 [2024-12-09 23:17:09.116948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:35:30.760 [2024-12-09 23:17:09.116957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.760 [2024-12-09 23:17:09.142743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.760 [2024-12-09 23:17:09.142782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:30.760 [2024-12-09 23:17:09.142796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.713 ms 00:35:30.760 [2024-12-09 23:17:09.142803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.760 [2024-12-09 23:17:09.142929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.760 [2024-12-09 23:17:09.142939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:30.760 [2024-12-09 23:17:09.142950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:35:30.760 [2024-12-09 23:17:09.142957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.760 [2024-12-09 23:17:09.173042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.760 [2024-12-09 23:17:09.173078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:30.760 [2024-12-09 23:17:09.173089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.060 ms 00:35:30.760 [2024-12-09 23:17:09.173097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.760 [2024-12-09 23:17:09.173157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.760 [2024-12-09 23:17:09.173167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:30.760 [2024-12-09 23:17:09.173176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:30.760 [2024-12-09 23:17:09.173184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.760 [2024-12-09 23:17:09.173527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.760 [2024-12-09 23:17:09.173543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:30.760 [2024-12-09 23:17:09.173553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:35:30.760 [2024-12-09 23:17:09.173561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:35:30.760 [2024-12-09 23:17:09.173684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.760 [2024-12-09 23:17:09.173693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:30.760 [2024-12-09 23:17:09.173703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:35:30.760 [2024-12-09 23:17:09.173710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.760 [2024-12-09 23:17:09.187746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.760 [2024-12-09 23:17:09.187777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:30.760 [2024-12-09 23:17:09.187789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.015 ms 00:35:30.760 [2024-12-09 23:17:09.187796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:30.760 [2024-12-09 23:17:09.215068] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:35:30.760 [2024-12-09 23:17:09.215112] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:30.760 [2024-12-09 23:17:09.215131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:30.760 [2024-12-09 23:17:09.215140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:30.760 [2024-12-09 23:17:09.215153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.223 ms 00:35:30.760 [2024-12-09 23:17:09.215166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.018 [2024-12-09 23:17:09.239517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.018 [2024-12-09 23:17:09.239554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:31.018 [2024-12-09 23:17:09.239567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.256 ms 00:35:31.018 [2024-12-09 23:17:09.239577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.018 [2024-12-09 23:17:09.251389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.018 [2024-12-09 23:17:09.251420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:31.018 [2024-12-09 23:17:09.251434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.740 ms 00:35:31.018 [2024-12-09 23:17:09.251442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.018 [2024-12-09 23:17:09.263246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.018 [2024-12-09 23:17:09.263284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:31.018 [2024-12-09 23:17:09.263298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.737 ms 00:35:31.018 [2024-12-09 23:17:09.263306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.018 [2024-12-09 23:17:09.263930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.018 [2024-12-09 23:17:09.263954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:31.018 [2024-12-09 23:17:09.263964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:35:31.018 [2024-12-09 23:17:09.263971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.018 [2024-12-09 
23:17:09.318265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.018 [2024-12-09 23:17:09.318312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:31.018 [2024-12-09 23:17:09.318326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.267 ms 00:35:31.018 [2024-12-09 23:17:09.318335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.018 [2024-12-09 23:17:09.328990] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:35:31.018 [2024-12-09 23:17:09.342467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.019 [2024-12-09 23:17:09.342512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:31.019 [2024-12-09 23:17:09.342523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.980 ms 00:35:31.019 [2024-12-09 23:17:09.342533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.019 [2024-12-09 23:17:09.342610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.019 [2024-12-09 23:17:09.342622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:31.019 [2024-12-09 23:17:09.342631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:31.019 [2024-12-09 23:17:09.342639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.019 [2024-12-09 23:17:09.342687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.019 [2024-12-09 23:17:09.342697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:31.019 [2024-12-09 23:17:09.342707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:31.019 [2024-12-09 23:17:09.342716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.019 [2024-12-09 23:17:09.342737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.019 [2024-12-09 23:17:09.342746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:31.019 [2024-12-09 23:17:09.342754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:31.019 [2024-12-09 23:17:09.342765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.019 [2024-12-09 23:17:09.342796] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:31.019 [2024-12-09 23:17:09.342811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.019 [2024-12-09 23:17:09.342818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:31.019 [2024-12-09 23:17:09.342827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:31.019 [2024-12-09 23:17:09.342836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.019 [2024-12-09 23:17:09.365429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.019 [2024-12-09 23:17:09.365462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:31.019 [2024-12-09 23:17:09.365475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.568 ms 00:35:31.019 [2024-12-09 23:17:09.365484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.019 [2024-12-09 23:17:09.365568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.019 [2024-12-09 23:17:09.365578] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:31.019 [2024-12-09 23:17:09.365590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:35:31.019 [2024-12-09 23:17:09.365598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.019 [2024-12-09 23:17:09.366663] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:31.019 [2024-12-09 23:17:09.369593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.764 ms, result 0 00:35:31.019 [2024-12-09 23:17:09.370314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:31.019 Some configs were skipped because the RPC state that can call them passed over. 00:35:31.019 23:17:09 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:35:31.276 [2024-12-09 23:17:09.680454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.276 [2024-12-09 23:17:09.680507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:35:31.276 [2024-12-09 23:17:09.680520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:35:31.276 [2024-12-09 23:17:09.680529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.277 [2024-12-09 23:17:09.680565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.398 ms, result 0 00:35:31.277 true 00:35:31.277 23:17:09 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:35:31.534 [2024-12-09 23:17:09.881407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.534 [2024-12-09 23:17:09.881462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:35:31.534 [2024-12-09 23:17:09.881476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.980 ms 00:35:31.534 [2024-12-09 23:17:09.881483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.534 [2024-12-09 23:17:09.881520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.096 ms, result 0 00:35:31.534 true 00:35:31.534 23:17:09 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76907 00:35:31.534 23:17:09 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76907 ']' 00:35:31.534 23:17:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76907 00:35:31.534 23:17:09 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:35:31.534 23:17:09 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:31.534 23:17:09 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76907 00:35:31.534 killing process with pid 76907 00:35:31.534 23:17:09 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:31.534 23:17:09 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:31.534 23:17:09 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76907' 00:35:31.534 23:17:09 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76907 00:35:31.534 23:17:09 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76907 00:35:32.467 [2024-12-09 23:17:10.625617] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.625678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:32.467 [2024-12-09 23:17:10.625692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:32.467 [2024-12-09 23:17:10.625704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.625725] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:35:32.467 [2024-12-09 23:17:10.628337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.628364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:32.467 [2024-12-09 23:17:10.628378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:35:32.467 [2024-12-09 23:17:10.628386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.628684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.628694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:32.467 [2024-12-09 23:17:10.628706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:35:32.467 [2024-12-09 23:17:10.628714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.633298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.633327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:32.467 [2024-12-09 23:17:10.633338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.561 ms 00:35:32.467 [2024-12-09 23:17:10.633345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.640203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.640243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:32.467 [2024-12-09 23:17:10.640258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.816 ms 00:35:32.467 [2024-12-09 23:17:10.640267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.650415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.650451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:32.467 [2024-12-09 23:17:10.650466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.094 ms 00:35:32.467 [2024-12-09 23:17:10.650474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.657978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.658011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:32.467 [2024-12-09 23:17:10.658023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.465 ms 00:35:32.467 [2024-12-09 23:17:10.658031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.658173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.658184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:32.467 [2024-12-09 23:17:10.658195] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:35:32.467 [2024-12-09 23:17:10.658204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.668829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.668857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:32.467 [2024-12-09 23:17:10.668869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.582 ms 00:35:32.467 [2024-12-09 23:17:10.668877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.678663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.678691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:32.467 [2024-12-09 23:17:10.678709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.750 ms 00:35:32.467 [2024-12-09 23:17:10.678717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.687849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.687975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:32.467 [2024-12-09 23:17:10.687993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.097 ms 00:35:32.467 [2024-12-09 23:17:10.688000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.698103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.467 [2024-12-09 23:17:10.698130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:32.467 [2024-12-09 23:17:10.698142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.044 ms 00:35:32.467 [2024-12-09 23:17:10.698149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.467 [2024-12-09 23:17:10.698182] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:32.467 [2024-12-09 23:17:10.698195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698301] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:32.467 [2024-12-09 23:17:10.698456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 
[2024-12-09 23:17:10.698531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:35:32.468 [2024-12-09 23:17:10.698745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.698993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.699001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.699009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.699020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.699027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.699036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.699043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.699052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.699060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.699068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:32.468 [2024-12-09 23:17:10.699089] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:32.468 [2024-12-09 23:17:10.699102] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8d08cbd7-a528-4f3d-b495-445a47785ac7 00:35:32.468 [2024-12-09 23:17:10.699110] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:32.468 [2024-12-09 23:17:10.699118] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:32.468 [2024-12-09 23:17:10.699125] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:32.468 [2024-12-09 23:17:10.699134] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:32.468 [2024-12-09 23:17:10.699141] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:32.468 [2024-12-09 23:17:10.699150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:32.468 [2024-12-09 23:17:10.699156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:32.468 [2024-12-09 23:17:10.699164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:32.468 [2024-12-09 23:17:10.699170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:32.468 [2024-12-09 23:17:10.699178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:32.468 [2024-12-09 23:17:10.699185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:32.468 [2024-12-09 23:17:10.699195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:35:32.468 [2024-12-09 23:17:10.699204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.468 [2024-12-09 23:17:10.711678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.468 [2024-12-09 23:17:10.711706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:32.468 [2024-12-09 23:17:10.711721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.435 ms 00:35:32.468 [2024-12-09 23:17:10.711729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.468 [2024-12-09 23:17:10.712085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.468 [2024-12-09 23:17:10.712097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:32.468 [2024-12-09 23:17:10.712106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:35:32.468 [2024-12-09 23:17:10.712114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.755599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.755635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:32.469 [2024-12-09 23:17:10.755648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.755657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.755757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.755769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:32.469 [2024-12-09 23:17:10.755779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.755786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.755832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.755841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:32.469 [2024-12-09 23:17:10.755852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.755860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.755897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.755905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:32.469 [2024-12-09 23:17:10.755914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.755923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.831985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.832030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:32.469 [2024-12-09 23:17:10.832044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.832053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 
23:17:10.893525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.893705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:32.469 [2024-12-09 23:17:10.893728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.893736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.893816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.893826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:32.469 [2024-12-09 23:17:10.893838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.893845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.893874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.893882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:32.469 [2024-12-09 23:17:10.893891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.893898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.893989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.893998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:32.469 [2024-12-09 23:17:10.894007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.894015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.894046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.894055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:32.469 [2024-12-09 23:17:10.894064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.894071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.894110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.894118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:32.469 [2024-12-09 23:17:10.894129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.894136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.894178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:32.469 [2024-12-09 23:17:10.894186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:32.469 [2024-12-09 23:17:10.894196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:32.469 [2024-12-09 23:17:10.894203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.469 [2024-12-09 23:17:10.894354] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 268.719 ms, result 0 00:35:33.406 23:17:11 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:35:33.406 23:17:11 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:33.406 [2024-12-09 23:17:11.617751] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:35:33.406 [2024-12-09 23:17:11.617867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76964 ] 00:35:33.406 [2024-12-09 23:17:11.771314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.674 [2024-12-09 23:17:11.868079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.674 [2024-12-09 23:17:12.123609] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:33.674 [2024-12-09 23:17:12.123671] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:33.933 [2024-12-09 23:17:12.277604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.933 [2024-12-09 23:17:12.277654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:33.933 [2024-12-09 23:17:12.277666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:33.933 [2024-12-09 23:17:12.277675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.933 [2024-12-09 23:17:12.280304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.933 [2024-12-09 23:17:12.280336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:33.933 [2024-12-09 23:17:12.280346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:35:33.933 [2024-12-09 23:17:12.280353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.933 [2024-12-09 23:17:12.280422] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:33.933 [2024-12-09 23:17:12.281070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:33.933 [2024-12-09 23:17:12.281096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.933 [2024-12-09 23:17:12.281103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:33.933 [2024-12-09 23:17:12.281112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:35:33.933 [2024-12-09 23:17:12.281119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.933 [2024-12-09 23:17:12.282200] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:33.933 [2024-12-09 23:17:12.294288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.933 [2024-12-09 23:17:12.294322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:33.933 [2024-12-09 23:17:12.294335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.090 ms 00:35:33.933 [2024-12-09 23:17:12.294343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.933 [2024-12-09 23:17:12.294429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.933 [2024-12-09 23:17:12.294440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:33.933 [2024-12-09 23:17:12.294448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.021 ms 00:35:33.933 [2024-12-09 23:17:12.294456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.933 [2024-12-09 23:17:12.299212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.933 [2024-12-09 23:17:12.299248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:33.933 [2024-12-09 23:17:12.299257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:35:33.933 [2024-12-09 23:17:12.299283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.933 [2024-12-09 23:17:12.299365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.933 [2024-12-09 23:17:12.299375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:33.933 [2024-12-09 23:17:12.299383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:35:33.933 [2024-12-09 23:17:12.299390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.933 [2024-12-09 23:17:12.299416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.933 [2024-12-09 23:17:12.299424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:33.933 [2024-12-09 23:17:12.299431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:33.933 [2024-12-09 23:17:12.299438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.933 [2024-12-09 23:17:12.299459] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:35:33.934 [2024-12-09 23:17:12.302715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.934 [2024-12-09 23:17:12.302873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:33.934 [2024-12-09 23:17:12.302889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.262 ms 00:35:33.934 [2024-12-09 23:17:12.302897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.934 [2024-12-09 23:17:12.302934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.934 [2024-12-09 23:17:12.302943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:33.934 [2024-12-09 23:17:12.302951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:33.934 [2024-12-09 23:17:12.302958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.934 [2024-12-09 23:17:12.302977] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:33.934 [2024-12-09 23:17:12.302995] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:33.934 [2024-12-09 23:17:12.303029] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:33.934 [2024-12-09 23:17:12.303043] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:33.934 [2024-12-09 23:17:12.303144] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:33.934 [2024-12-09 23:17:12.303154] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:33.934 [2024-12-09 23:17:12.303165] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:33.934 [2024-12-09 23:17:12.303177] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303185] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303193] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:35:33.934 [2024-12-09 23:17:12.303200] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:33.934 [2024-12-09 23:17:12.303207] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:33.934 [2024-12-09 23:17:12.303214] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:33.934 [2024-12-09 23:17:12.303238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.934 [2024-12-09 23:17:12.303246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:33.934 [2024-12-09 23:17:12.303254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:35:33.934 [2024-12-09 23:17:12.303261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.934 [2024-12-09 23:17:12.303348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.934 [2024-12-09 23:17:12.303359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:33.934 [2024-12-09 23:17:12.303366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:33.934 [2024-12-09 23:17:12.303373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.934 [2024-12-09 23:17:12.303484] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:33.934 [2024-12-09 23:17:12.303494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:33.934 [2024-12-09 23:17:12.303502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:33.934 [2024-12-09 23:17:12.303524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:33.934 [2024-12-09 23:17:12.303544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:33.934 [2024-12-09 23:17:12.303557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:33.934 [2024-12-09 23:17:12.303569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:35:33.934 [2024-12-09 23:17:12.303576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:33.934 [2024-12-09 23:17:12.303582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:33.934 [2024-12-09 23:17:12.303589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:35:33.934 [2024-12-09 23:17:12.303595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303601] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:33.934 [2024-12-09 23:17:12.303607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:33.934 [2024-12-09 23:17:12.303626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:33.934 [2024-12-09 23:17:12.303647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:33.934 [2024-12-09 23:17:12.303667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:33.934 [2024-12-09 23:17:12.303686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:33.934 [2024-12-09 23:17:12.303705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:33.934 [2024-12-09 23:17:12.303717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:33.934 [2024-12-09 23:17:12.303724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:35:33.934 [2024-12-09 23:17:12.303730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:33.934 [2024-12-09 23:17:12.303736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:33.934 [2024-12-09 23:17:12.303743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:35:33.934 [2024-12-09 23:17:12.303749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:33.934 [2024-12-09 23:17:12.303762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:35:33.934 [2024-12-09 23:17:12.303769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303775] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:33.934 [2024-12-09 23:17:12.303783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:33.934 [2024-12-09 23:17:12.303792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:33.934 [2024-12-09 23:17:12.303806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:33.934 
[2024-12-09 23:17:12.303812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:33.934 [2024-12-09 23:17:12.303818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:33.934 [2024-12-09 23:17:12.303824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:33.934 [2024-12-09 23:17:12.303831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:33.934 [2024-12-09 23:17:12.303837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:33.934 [2024-12-09 23:17:12.303844] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:33.934 [2024-12-09 23:17:12.303854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:33.934 [2024-12-09 23:17:12.303862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:35:33.934 [2024-12-09 23:17:12.303869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:35:33.934 [2024-12-09 23:17:12.303876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:35:33.934 [2024-12-09 23:17:12.303884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:35:33.934 [2024-12-09 23:17:12.303890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:35:33.934 [2024-12-09 23:17:12.303897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:35:33.934 [2024-12-09 23:17:12.303904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:35:33.934 [2024-12-09 23:17:12.303911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:35:33.934 [2024-12-09 23:17:12.303918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:35:33.934 [2024-12-09 23:17:12.303925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:35:33.934 [2024-12-09 23:17:12.303932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:35:33.934 [2024-12-09 23:17:12.303939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:35:33.934 [2024-12-09 23:17:12.303946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:35:33.934 [2024-12-09 23:17:12.303953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:35:33.934 [2024-12-09 23:17:12.303960] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:33.934 [2024-12-09 23:17:12.303968] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:33.934 [2024-12-09 23:17:12.303976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:33.935 [2024-12-09 23:17:12.303983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:33.935 [2024-12-09 23:17:12.303990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:33.935 [2024-12-09 23:17:12.303997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:33.935 [2024-12-09 23:17:12.304004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.935 [2024-12-09 23:17:12.304014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:33.935 [2024-12-09 23:17:12.304021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:35:33.935 [2024-12-09 23:17:12.304027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.935 [2024-12-09 23:17:12.329417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.935 [2024-12-09 23:17:12.329543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:33.935 [2024-12-09 23:17:12.329558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.339 ms 00:35:33.935 [2024-12-09 23:17:12.329566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.935 [2024-12-09 23:17:12.329682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.935 [2024-12-09 23:17:12.329692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:33.935 [2024-12-09 23:17:12.329701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:35:33.935 [2024-12-09 23:17:12.329708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.935 [2024-12-09 23:17:12.373283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.935 [2024-12-09 23:17:12.373321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:33.935 [2024-12-09 23:17:12.373336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.556 ms 00:35:33.935 [2024-12-09 23:17:12.373344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.935 [2024-12-09 23:17:12.373441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.935 [2024-12-09 23:17:12.373453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:33.935 [2024-12-09 23:17:12.373462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:33.935 [2024-12-09 23:17:12.373469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.935 [2024-12-09 23:17:12.373778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.935 [2024-12-09 23:17:12.373798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:33.935 [2024-12-09 23:17:12.373810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:35:33.935 [2024-12-09 23:17:12.373818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.935 [2024-12-09 
23:17:12.373942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.935 [2024-12-09 23:17:12.373956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:33.935 [2024-12-09 23:17:12.373964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:35:33.935 [2024-12-09 23:17:12.373971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:33.935 [2024-12-09 23:17:12.387093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:33.935 [2024-12-09 23:17:12.387125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:33.935 [2024-12-09 23:17:12.387135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.103 ms 00:35:33.935 [2024-12-09 23:17:12.387143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.193 [2024-12-09 23:17:12.399137] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:35:34.193 [2024-12-09 23:17:12.399169] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:34.193 [2024-12-09 23:17:12.399180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.193 [2024-12-09 23:17:12.399188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:34.193 [2024-12-09 23:17:12.399197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.926 ms 00:35:34.193 [2024-12-09 23:17:12.399204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.193 [2024-12-09 23:17:12.423102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.193 [2024-12-09 23:17:12.423135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:34.193 [2024-12-09 23:17:12.423146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.818 ms 00:35:34.193 [2024-12-09 23:17:12.423154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.193 [2024-12-09 23:17:12.434171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.193 [2024-12-09 23:17:12.434200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:34.193 [2024-12-09 23:17:12.434210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.930 ms 00:35:34.193 [2024-12-09 23:17:12.434228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.193 [2024-12-09 23:17:12.444983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.193 [2024-12-09 23:17:12.445104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:34.193 [2024-12-09 23:17:12.445120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.695 ms 00:35:34.193 [2024-12-09 23:17:12.445127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.193 [2024-12-09 23:17:12.445749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.193 [2024-12-09 23:17:12.445771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:34.193 [2024-12-09 23:17:12.445780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:35:34.193 [2024-12-09 23:17:12.445787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.193 [2024-12-09 23:17:12.499429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:35:34.193 [2024-12-09 23:17:12.499606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:34.193 [2024-12-09 23:17:12.499624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.618 ms 00:35:34.193 [2024-12-09 23:17:12.499632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.193 [2024-12-09 23:17:12.509886] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:35:34.193 [2024-12-09 23:17:12.523387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.193 [2024-12-09 23:17:12.523421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:34.193 [2024-12-09 23:17:12.523433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.666 ms 00:35:34.193 [2024-12-09 23:17:12.523444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.193 [2024-12-09 23:17:12.523522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.193 [2024-12-09 23:17:12.523533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:34.193 [2024-12-09 23:17:12.523541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:34.193 [2024-12-09 23:17:12.523548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.194 [2024-12-09 23:17:12.523592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.194 [2024-12-09 23:17:12.523600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:34.194 [2024-12-09 23:17:12.523607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:35:34.194 [2024-12-09 23:17:12.523618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.194 [2024-12-09 23:17:12.523647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.194 [2024-12-09 23:17:12.523656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:34.194 [2024-12-09 23:17:12.523664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:34.194 [2024-12-09 23:17:12.523671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.194 [2024-12-09 23:17:12.523700] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:34.194 [2024-12-09 23:17:12.523709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.194 [2024-12-09 23:17:12.523716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:34.194 [2024-12-09 23:17:12.523724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:34.194 [2024-12-09 23:17:12.523731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.194 [2024-12-09 23:17:12.546156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.194 [2024-12-09 23:17:12.546191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:34.194 [2024-12-09 23:17:12.546203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.403 ms 00:35:34.194 [2024-12-09 23:17:12.546212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.194 [2024-12-09 23:17:12.546308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:34.194 [2024-12-09 23:17:12.546319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:35:34.194 [2024-12-09 23:17:12.546327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:35:34.194 [2024-12-09 23:17:12.546335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:34.194 [2024-12-09 23:17:12.547433] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:34.194 [2024-12-09 23:17:12.550566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 269.548 ms, result 0 00:35:34.194 [2024-12-09 23:17:12.551149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:34.194 [2024-12-09 23:17:12.563900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:35.128  [2024-12-09T23:17:14.962Z] Copying: 43/256 [MB] (43 MBps) [2024-12-09T23:17:15.896Z] Copying: 61/256 [MB] (17 MBps) [2024-12-09T23:17:16.828Z] Copying: 81/256 [MB] (20 MBps) [2024-12-09T23:17:17.762Z] Copying: 98/256 [MB] (16 MBps) [2024-12-09T23:17:18.698Z] Copying: 129/256 [MB] (31 MBps) [2024-12-09T23:17:19.699Z] Copying: 155/256 [MB] (25 MBps) [2024-12-09T23:17:20.632Z] Copying: 170/256 [MB] (14 MBps) [2024-12-09T23:17:22.008Z] Copying: 197/256 [MB] (27 MBps) [2024-12-09T23:17:22.581Z] Copying: 219/256 [MB] (21 MBps) [2024-12-09T23:17:23.977Z] Copying: 239/256 [MB] (19 MBps) [2024-12-09T23:17:23.977Z] Copying: 251/256 [MB] (12 MBps) [2024-12-09T23:17:23.977Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-09 23:17:23.851846] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:45.515 [2024-12-09 23:17:23.861793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.515 [2024-12-09 23:17:23.861834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:45.515 [2024-12-09 23:17:23.861858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:45.515 [2024-12-09 23:17:23.861867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.515 [2024-12-09 23:17:23.861890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:35:45.515 [2024-12-09 23:17:23.864693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.515 [2024-12-09 23:17:23.864724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:45.515 [2024-12-09 23:17:23.864735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.789 ms 00:35:45.515 [2024-12-09 23:17:23.864744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.515 [2024-12-09 23:17:23.865005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.515 [2024-12-09 23:17:23.865022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:45.515 [2024-12-09 23:17:23.865032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:35:45.516 [2024-12-09 23:17:23.865041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.516 [2024-12-09 23:17:23.868748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.516 [2024-12-09 23:17:23.868768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:45.516 [2024-12-09 23:17:23.868778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.687 ms 00:35:45.516 [2024-12-09 23:17:23.868787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.516 [2024-12-09 23:17:23.875648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.516 [2024-12-09 23:17:23.875801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:45.516 [2024-12-09 23:17:23.875818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.843 ms 00:35:45.516 [2024-12-09 23:17:23.875826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.516 [2024-12-09 23:17:23.900870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.516 [2024-12-09 23:17:23.900908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:45.516 [2024-12-09 23:17:23.900921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.981 ms 00:35:45.516 [2024-12-09 23:17:23.900929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.516 [2024-12-09 23:17:23.916935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.516 [2024-12-09 23:17:23.916976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:45.516 [2024-12-09 23:17:23.916996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.944 ms 00:35:45.516 [2024-12-09 23:17:23.917006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.516 [2024-12-09 23:17:23.917153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.516 [2024-12-09 23:17:23.917165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:45.516 [2024-12-09 23:17:23.917182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:35:45.516 [2024-12-09 23:17:23.917190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.516 [2024-12-09 23:17:23.941501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.516 [2024-12-09 23:17:23.941540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:45.516 [2024-12-09 23:17:23.941552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.294 ms 00:35:45.516 [2024-12-09 23:17:23.941560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.516 [2024-12-09 23:17:23.965615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.516 [2024-12-09 23:17:23.965652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:45.516 [2024-12-09 23:17:23.965665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.016 ms 00:35:45.516 [2024-12-09 23:17:23.965673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.776 [2024-12-09 23:17:23.989693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.776 [2024-12-09 23:17:23.989730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:45.776 [2024-12-09 23:17:23.989742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.980 ms 00:35:45.776 [2024-12-09 23:17:23.989750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.776 [2024-12-09 23:17:24.013558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.776 [2024-12-09 23:17:24.013596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:45.776 [2024-12-09 
23:17:24.013607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.742 ms 00:35:45.776 [2024-12-09 23:17:24.013616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.776 [2024-12-09 23:17:24.013658] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:45.776 [2024-12-09 23:17:24.013675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:45.776 [2024-12-09 23:17:24.013687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.013999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014045] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014249] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:45.777 [2024-12-09 23:17:24.014445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:45.778 [2024-12-09 23:17:24.014454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:45.778 [2024-12-09 23:17:24.014461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:45.778 [2024-12-09 23:17:24.014468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:45.778 [2024-12-09 
23:17:24.014478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:45.778 [2024-12-09 23:17:24.014486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:45.778 [2024-12-09 23:17:24.014494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:45.778 [2024-12-09 23:17:24.014511] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:45.778 [2024-12-09 23:17:24.014519] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8d08cbd7-a528-4f3d-b495-445a47785ac7 00:35:45.778 [2024-12-09 23:17:24.014528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:45.778 [2024-12-09 23:17:24.014535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:45.778 [2024-12-09 23:17:24.014543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:45.778 [2024-12-09 23:17:24.014552] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:45.778 [2024-12-09 23:17:24.014559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:45.778 [2024-12-09 23:17:24.014567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:45.778 [2024-12-09 23:17:24.014577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:45.778 [2024-12-09 23:17:24.014584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:45.778 [2024-12-09 23:17:24.014590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:45.778 [2024-12-09 23:17:24.014597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.778 [2024-12-09 23:17:24.014605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:45.778 [2024-12-09 23:17:24.014613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:35:45.778 [2024-12-09 23:17:24.014620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.027982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.778 [2024-12-09 23:17:24.028019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:45.778 [2024-12-09 23:17:24.028032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.332 ms 00:35:45.778 [2024-12-09 23:17:24.028041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.028463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:45.778 [2024-12-09 23:17:24.028484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:45.778 [2024-12-09 23:17:24.028493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:35:45.778 [2024-12-09 23:17:24.028501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.065902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:45.778 [2024-12-09 23:17:24.066166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:45.778 [2024-12-09 23:17:24.066188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:45.778 [2024-12-09 23:17:24.066204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.066312] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:35:45.778 [2024-12-09 23:17:24.066323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:45.778 [2024-12-09 23:17:24.066332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:45.778 [2024-12-09 23:17:24.066340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.066392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:45.778 [2024-12-09 23:17:24.066401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:45.778 [2024-12-09 23:17:24.066409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:45.778 [2024-12-09 23:17:24.066417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.066438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:45.778 [2024-12-09 23:17:24.066447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:45.778 [2024-12-09 23:17:24.066455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:45.778 [2024-12-09 23:17:24.066462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.149187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:45.778 [2024-12-09 23:17:24.149413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:45.778 [2024-12-09 23:17:24.149435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:45.778 [2024-12-09 23:17:24.149445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.217066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:45.778 [2024-12-09 23:17:24.217289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:45.778 [2024-12-09 23:17:24.217308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:45.778 [2024-12-09 23:17:24.217318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.217386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:45.778 [2024-12-09 23:17:24.217396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:45.778 [2024-12-09 23:17:24.217428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:45.778 [2024-12-09 23:17:24.217436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.217468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:45.778 [2024-12-09 23:17:24.217483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:45.778 [2024-12-09 23:17:24.217492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:45.778 [2024-12-09 23:17:24.217500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:45.778 [2024-12-09 23:17:24.217603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:45.778 [2024-12-09 23:17:24.217615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:45.778 [2024-12-09 23:17:24.217623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:45.778 [2024-12-09 23:17:24.217631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0
00:35:45.778 [2024-12-09 23:17:24.217664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:45.778 [2024-12-09 23:17:24.217674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:35:45.778 [2024-12-09 23:17:24.217686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:45.778 [2024-12-09 23:17:24.217695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:45.778 [2024-12-09 23:17:24.217739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:45.778 [2024-12-09 23:17:24.217749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:35:45.778 [2024-12-09 23:17:24.217758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:45.778 [2024-12-09 23:17:24.217766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:45.778 [2024-12-09 23:17:24.217811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:45.778 [2024-12-09 23:17:24.217825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:35:45.778 [2024-12-09 23:17:24.217833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:45.778 [2024-12-09 23:17:24.217842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:45.778 [2024-12-09 23:17:24.217999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.183 ms, result 0
00:35:46.722
00:35:46.722
00:35:46.722 23:17:24 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:35:46.722 23:17:25 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:35:47.311 23:17:25 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:35:47.311 [2024-12-09 23:17:25.658056] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
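For reference, the three ftl_trim harness steps traced just above (trim.sh@86, @87 and @90) can be replayed by hand outside the CI job. A minimal sketch, assuming the same /home/vagrant/spdk_repo checkout and the ftl.json config written earlier in this run; every path and flag is taken verbatim from the trace, only the spdk/data shell variables are introduced here for readability:

    #!/usr/bin/env bash
    set -euo pipefail
    spdk=/home/vagrant/spdk_repo/spdk
    data=$spdk/test/ftl/data

    # The first 4 MiB of the file read back from ftl0 (presumably the
    # trimmed range) must compare equal to zeroes.
    cmp --bytes=4194304 "$data" /dev/zero

    # Record a checksum of the full read-back file for later comparison.
    md5sum "$data"

    # Write the prepared random pattern into the ftl0 bdev. spdk_dd is a
    # standalone SPDK app, so the --json config is enough to bring up the
    # bdev stack; no long-running SPDK target is needed.
    "$spdk/build/bin/spdk_dd" --if="$spdk/test/ftl/random_pattern" --ob=ftl0 \
        --count=1024 --json="$spdk/test/ftl/config/ftl.json"

With set -e in effect, a non-zero exit from cmp or spdk_dd aborts the replay at that step, mirroring how the harness treats a failure here.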
00:35:47.311 [2024-12-09 23:17:25.658245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77115 ] 00:35:47.573 [2024-12-09 23:17:25.814680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.573 [2024-12-09 23:17:25.953037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.848 [2024-12-09 23:17:26.262824] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:47.848 [2024-12-09 23:17:26.262921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:48.109 [2024-12-09 23:17:26.428088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.109 [2024-12-09 23:17:26.428156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:48.109 [2024-12-09 23:17:26.428173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:48.109 [2024-12-09 23:17:26.428183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.109 [2024-12-09 23:17:26.431292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.109 [2024-12-09 23:17:26.431345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:48.109 [2024-12-09 23:17:26.431358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:35:48.109 [2024-12-09 23:17:26.431366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.109 [2024-12-09 23:17:26.431494] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:48.109 [2024-12-09 23:17:26.432242] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:48.109 [2024-12-09 23:17:26.432273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.109 [2024-12-09 23:17:26.432283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:48.110 [2024-12-09 23:17:26.432293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:35:48.110 [2024-12-09 23:17:26.432303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.110 [2024-12-09 23:17:26.434168] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:48.110 [2024-12-09 23:17:26.449020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.110 [2024-12-09 23:17:26.449075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:48.110 [2024-12-09 23:17:26.449092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.854 ms 00:35:48.110 [2024-12-09 23:17:26.449100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.110 [2024-12-09 23:17:26.449249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.110 [2024-12-09 23:17:26.449264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:48.110 [2024-12-09 23:17:26.449274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:35:48.110 [2024-12-09 23:17:26.449283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.110 [2024-12-09 23:17:26.458566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:48.110 [2024-12-09 23:17:26.458793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:48.110 [2024-12-09 23:17:26.458815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.231 ms 00:35:48.110 [2024-12-09 23:17:26.458824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.110 [2024-12-09 23:17:26.458953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.110 [2024-12-09 23:17:26.458965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:48.110 [2024-12-09 23:17:26.458975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:35:48.110 [2024-12-09 23:17:26.458983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.110 [2024-12-09 23:17:26.459017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.110 [2024-12-09 23:17:26.459027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:48.110 [2024-12-09 23:17:26.459036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:48.110 [2024-12-09 23:17:26.459043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.110 [2024-12-09 23:17:26.459067] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:35:48.110 [2024-12-09 23:17:26.463373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.110 [2024-12-09 23:17:26.463416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:48.110 [2024-12-09 23:17:26.463428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.312 ms 00:35:48.110 [2024-12-09 23:17:26.463436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.110 [2024-12-09 23:17:26.463521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.110 [2024-12-09 23:17:26.463533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:48.110 [2024-12-09 23:17:26.463543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:35:48.110 [2024-12-09 23:17:26.463551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.110 [2024-12-09 23:17:26.463590] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:48.110 [2024-12-09 23:17:26.463613] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:48.110 [2024-12-09 23:17:26.463651] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:48.110 [2024-12-09 23:17:26.463668] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:48.110 [2024-12-09 23:17:26.463778] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:48.110 [2024-12-09 23:17:26.463789] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:48.110 [2024-12-09 23:17:26.463802] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:48.110 [2024-12-09 23:17:26.463816] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:48.110 [2024-12-09 23:17:26.463825] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:48.110 [2024-12-09 23:17:26.463833] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:35:48.110 [2024-12-09 23:17:26.463841] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:48.110 [2024-12-09 23:17:26.463849] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:48.110 [2024-12-09 23:17:26.463857] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:48.110 [2024-12-09 23:17:26.463865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.110 [2024-12-09 23:17:26.463874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:48.110 [2024-12-09 23:17:26.463882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:35:48.110 [2024-12-09 23:17:26.463891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.110 [2024-12-09 23:17:26.463982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.110 [2024-12-09 23:17:26.463993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:48.110 [2024-12-09 23:17:26.464001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:35:48.110 [2024-12-09 23:17:26.464010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.110 [2024-12-09 23:17:26.464111] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:48.110 [2024-12-09 23:17:26.464121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:48.110 [2024-12-09 23:17:26.464130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:48.110 [2024-12-09 23:17:26.464138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:48.110 [2024-12-09 23:17:26.464154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:35:48.110 [2024-12-09 23:17:26.464170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:48.110 [2024-12-09 23:17:26.464177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:48.110 [2024-12-09 23:17:26.464191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:48.110 [2024-12-09 23:17:26.464207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:35:48.110 [2024-12-09 23:17:26.464252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:48.110 [2024-12-09 23:17:26.464260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:48.110 [2024-12-09 23:17:26.464267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:35:48.110 [2024-12-09 23:17:26.464275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:48.110 [2024-12-09 23:17:26.464289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:35:48.110 [2024-12-09 23:17:26.464296] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:48.110 [2024-12-09 23:17:26.464311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:48.110 [2024-12-09 23:17:26.464325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:48.110 [2024-12-09 23:17:26.464332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:48.110 [2024-12-09 23:17:26.464346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:48.110 [2024-12-09 23:17:26.464353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:48.110 [2024-12-09 23:17:26.464368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:48.110 [2024-12-09 23:17:26.464374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:48.110 [2024-12-09 23:17:26.464388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:48.110 [2024-12-09 23:17:26.464396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:48.110 [2024-12-09 23:17:26.464409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:48.110 [2024-12-09 23:17:26.464416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:35:48.110 [2024-12-09 23:17:26.464423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:48.110 [2024-12-09 23:17:26.464431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:48.110 [2024-12-09 23:17:26.464439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:35:48.110 [2024-12-09 23:17:26.464446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:48.110 [2024-12-09 23:17:26.464459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:35:48.110 [2024-12-09 23:17:26.464472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464479] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:48.110 [2024-12-09 23:17:26.464487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:48.110 [2024-12-09 23:17:26.464497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:48.110 [2024-12-09 23:17:26.464505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:48.110 [2024-12-09 23:17:26.464513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:48.110 [2024-12-09 23:17:26.464519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:48.110 [2024-12-09 23:17:26.464527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:48.110 
[2024-12-09 23:17:26.464534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:48.110 [2024-12-09 23:17:26.464540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:48.110 [2024-12-09 23:17:26.464549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:48.110 [2024-12-09 23:17:26.464558] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:48.111 [2024-12-09 23:17:26.464567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:48.111 [2024-12-09 23:17:26.464576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:35:48.111 [2024-12-09 23:17:26.464583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:35:48.111 [2024-12-09 23:17:26.464591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:35:48.111 [2024-12-09 23:17:26.464599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:35:48.111 [2024-12-09 23:17:26.464607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:35:48.111 [2024-12-09 23:17:26.464614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:35:48.111 [2024-12-09 23:17:26.464621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:35:48.111 [2024-12-09 23:17:26.464628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:35:48.111 [2024-12-09 23:17:26.464636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:35:48.111 [2024-12-09 23:17:26.464644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:35:48.111 [2024-12-09 23:17:26.464651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:35:48.111 [2024-12-09 23:17:26.464658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:35:48.111 [2024-12-09 23:17:26.464665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:35:48.111 [2024-12-09 23:17:26.464673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:35:48.111 [2024-12-09 23:17:26.464681] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:48.111 [2024-12-09 23:17:26.464690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:48.111 [2024-12-09 23:17:26.464699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:48.111 [2024-12-09 23:17:26.464707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:48.111 [2024-12-09 23:17:26.464714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:48.111 [2024-12-09 23:17:26.464721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:48.111 [2024-12-09 23:17:26.464729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.111 [2024-12-09 23:17:26.464741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:48.111 [2024-12-09 23:17:26.464749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:35:48.111 [2024-12-09 23:17:26.464756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.111 [2024-12-09 23:17:26.497830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.111 [2024-12-09 23:17:26.498077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:48.111 [2024-12-09 23:17:26.498099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.012 ms 00:35:48.111 [2024-12-09 23:17:26.498109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.111 [2024-12-09 23:17:26.498301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.111 [2024-12-09 23:17:26.498315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:48.111 [2024-12-09 23:17:26.498326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:35:48.111 [2024-12-09 23:17:26.498334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.111 [2024-12-09 23:17:26.542780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.111 [2024-12-09 23:17:26.542852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:48.111 [2024-12-09 23:17:26.542872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.417 ms 00:35:48.111 [2024-12-09 23:17:26.542882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.111 [2024-12-09 23:17:26.543039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.111 [2024-12-09 23:17:26.543054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:48.111 [2024-12-09 23:17:26.543064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:48.111 [2024-12-09 23:17:26.543073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.111 [2024-12-09 23:17:26.543720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.111 [2024-12-09 23:17:26.543758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:48.111 [2024-12-09 23:17:26.543776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:35:48.111 [2024-12-09 23:17:26.543785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.111 [2024-12-09 23:17:26.543938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.111 [2024-12-09 23:17:26.543948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:48.111 [2024-12-09 23:17:26.543956] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:35:48.111 [2024-12-09 23:17:26.543964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.111 [2024-12-09 23:17:26.560780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.111 [2024-12-09 23:17:26.560831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:48.111 [2024-12-09 23:17:26.560844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.792 ms 00:35:48.111 [2024-12-09 23:17:26.560853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.575535] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:35:48.373 [2024-12-09 23:17:26.575588] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:48.373 [2024-12-09 23:17:26.575604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.575613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:48.373 [2024-12-09 23:17:26.575624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.606 ms 00:35:48.373 [2024-12-09 23:17:26.575632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.601924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.601978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:48.373 [2024-12-09 23:17:26.601992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.184 ms 00:35:48.373 [2024-12-09 23:17:26.602002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.614834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.614881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:48.373 [2024-12-09 23:17:26.614894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.727 ms 00:35:48.373 [2024-12-09 23:17:26.614902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.627179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.627257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:48.373 [2024-12-09 23:17:26.627270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.184 ms 00:35:48.373 [2024-12-09 23:17:26.627277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.627974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.628000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:48.373 [2024-12-09 23:17:26.628011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:35:48.373 [2024-12-09 23:17:26.628019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.693812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.694099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:48.373 [2024-12-09 23:17:26.694126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.764 ms 00:35:48.373 [2024-12-09 23:17:26.694135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.705893] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:35:48.373 [2024-12-09 23:17:26.726490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.726543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:48.373 [2024-12-09 23:17:26.726565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.223 ms 00:35:48.373 [2024-12-09 23:17:26.726575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.726684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.726696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:48.373 [2024-12-09 23:17:26.726707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:35:48.373 [2024-12-09 23:17:26.726715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.726774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.726786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:48.373 [2024-12-09 23:17:26.726800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:35:48.373 [2024-12-09 23:17:26.726811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.726840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.726850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:48.373 [2024-12-09 23:17:26.726859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:48.373 [2024-12-09 23:17:26.726866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.726905] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:48.373 [2024-12-09 23:17:26.726916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.726924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:48.373 [2024-12-09 23:17:26.726933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:48.373 [2024-12-09 23:17:26.726942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.753645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.753696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:48.373 [2024-12-09 23:17:26.753712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.680 ms 00:35:48.373 [2024-12-09 23:17:26.753721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.373 [2024-12-09 23:17:26.753840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.373 [2024-12-09 23:17:26.753852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:48.373 [2024-12-09 23:17:26.753863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:35:48.373 [2024-12-09 23:17:26.753879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:35:48.373 [2024-12-09 23:17:26.755612] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:48.373 [2024-12-09 23:17:26.759303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.163 ms, result 0 00:35:48.373 [2024-12-09 23:17:26.760203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:48.373 [2024-12-09 23:17:26.774010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:48.945  [2024-12-09T23:17:27.407Z] Copying: 4096/4096 [kB] (average 10138 kBps)[2024-12-09 23:17:27.181529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:48.945 [2024-12-09 23:17:27.191792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.945 [2024-12-09 23:17:27.192013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:48.945 [2024-12-09 23:17:27.192038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:48.945 [2024-12-09 23:17:27.192047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.945 [2024-12-09 23:17:27.192082] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:35:48.945 [2024-12-09 23:17:27.195077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.945 [2024-12-09 23:17:27.195274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:48.945 [2024-12-09 23:17:27.195297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.979 ms 00:35:48.945 [2024-12-09 23:17:27.195307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.945 [2024-12-09 23:17:27.198415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.945 [2024-12-09 23:17:27.198457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:48.945 [2024-12-09 23:17:27.198467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.072 ms 00:35:48.945 [2024-12-09 23:17:27.198480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.945 [2024-12-09 23:17:27.203068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.945 [2024-12-09 23:17:27.203108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:48.945 [2024-12-09 23:17:27.203120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.569 ms 00:35:48.945 [2024-12-09 23:17:27.203128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.945 [2024-12-09 23:17:27.210087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.945 [2024-12-09 23:17:27.210291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:48.945 [2024-12-09 23:17:27.210314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.925 ms 00:35:48.945 [2024-12-09 23:17:27.210323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.945 [2024-12-09 23:17:27.237214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.945 [2024-12-09 23:17:27.237437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:48.945 [2024-12-09 23:17:27.237461] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 26.829 ms 00:35:48.945 [2024-12-09 23:17:27.237469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.945 [2024-12-09 23:17:27.254027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.945 [2024-12-09 23:17:27.254084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:48.945 [2024-12-09 23:17:27.254099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.329 ms 00:35:48.945 [2024-12-09 23:17:27.254109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.945 [2024-12-09 23:17:27.254306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.945 [2024-12-09 23:17:27.254321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:48.945 [2024-12-09 23:17:27.254340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:35:48.945 [2024-12-09 23:17:27.254349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.945 [2024-12-09 23:17:27.280733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.945 [2024-12-09 23:17:27.280930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:48.945 [2024-12-09 23:17:27.280951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.364 ms 00:35:48.945 [2024-12-09 23:17:27.280960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.945 [2024-12-09 23:17:27.306946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.945 [2024-12-09 23:17:27.306996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:48.945 [2024-12-09 23:17:27.307010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.823 ms 00:35:48.945 [2024-12-09 23:17:27.307019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.945 [2024-12-09 23:17:27.332460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.946 [2024-12-09 23:17:27.332519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:48.946 [2024-12-09 23:17:27.332534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.387 ms 00:35:48.946 [2024-12-09 23:17:27.332542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.946 [2024-12-09 23:17:27.358532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.946 [2024-12-09 23:17:27.358581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:48.946 [2024-12-09 23:17:27.358595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.886 ms 00:35:48.946 [2024-12-09 23:17:27.358603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.946 [2024-12-09 23:17:27.358655] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:48.946 [2024-12-09 23:17:27.358672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:35:48.946 [2024-12-09 23:17:27.358709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.358999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359300] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:48.946 [2024-12-09 23:17:27.359350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:48.947 [2024-12-09 23:17:27.359501] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:48.947 [2024-12-09 23:17:27.359510] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8d08cbd7-a528-4f3d-b495-445a47785ac7 00:35:48.947 [2024-12-09 23:17:27.359519] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:48.947 [2024-12-09 23:17:27.359527] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:35:48.947 [2024-12-09 23:17:27.359534] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:48.947 [2024-12-09 23:17:27.359543] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:48.947 [2024-12-09 23:17:27.359551] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:48.947 [2024-12-09 23:17:27.359563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:48.947 [2024-12-09 23:17:27.359571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:48.947 [2024-12-09 23:17:27.359578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:48.947 [2024-12-09 23:17:27.359584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:48.947 [2024-12-09 23:17:27.359592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.947 [2024-12-09 23:17:27.359600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:48.947 [2024-12-09 23:17:27.359609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:35:48.947 [2024-12-09 23:17:27.359616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.947 [2024-12-09 23:17:27.373248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.947 [2024-12-09 23:17:27.373284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:48.947 [2024-12-09 23:17:27.373296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.591 ms 00:35:48.947 [2024-12-09 23:17:27.373310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.947 [2024-12-09 23:17:27.373738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.947 [2024-12-09 23:17:27.373750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:48.947 [2024-12-09 23:17:27.373759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:35:48.947 [2024-12-09 23:17:27.373767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.412438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.412495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:49.207 [2024-12-09 23:17:27.412513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.412522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.412627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.412638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:49.207 [2024-12-09 23:17:27.412647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.412655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.412712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.412722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:49.207 [2024-12-09 23:17:27.412731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.412744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.412763] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.412771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:49.207 [2024-12-09 23:17:27.412779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.412786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.499187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.499264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:49.207 [2024-12-09 23:17:27.499278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.499294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.571858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.571925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:49.207 [2024-12-09 23:17:27.571941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.571951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.572034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.572046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:49.207 [2024-12-09 23:17:27.572056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.572065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.572109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.572119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:49.207 [2024-12-09 23:17:27.572128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.572136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.572267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.572279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:49.207 [2024-12-09 23:17:27.572288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.572296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.572331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.572345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:49.207 [2024-12-09 23:17:27.572353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.572361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.572406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.572416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:49.207 [2024-12-09 23:17:27.572425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.572433] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.572486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.207 [2024-12-09 23:17:27.572497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:49.207 [2024-12-09 23:17:27.572506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.207 [2024-12-09 23:17:27.572514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.207 [2024-12-09 23:17:27.572678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.871 ms, result 0 00:35:50.152 00:35:50.152 00:35:50.152 23:17:28 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77140 00:35:50.152 23:17:28 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77140 00:35:50.152 23:17:28 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:35:50.152 23:17:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77140 ']' 00:35:50.152 23:17:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.152 23:17:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:50.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:50.152 23:17:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.152 23:17:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:50.152 23:17:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:35:50.152 [2024-12-09 23:17:28.463342] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
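The test then restarts the target and brings the FTL device back purely from the saved JSON configuration: spdk_tgt is launched with FTL init tracing, waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers, and rpc.py load_config replays the bdev configuration, which is why ftl0 re-runs the same startup steps below. A minimal sketch of that sequence, with one labeled assumption: the log does not show how the JSON reaches load_config, so the stdin redirect below assumes load_config's default of reading the config from stdin (binary paths, the socket, and the ftl_init log flag are taken from the log itself):

    # Start the SPDK target with FTL init tracing enabled, in the background.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # Block until the target is up and listening on /var/tmp/spdk.sock.
    waitforlisten "$svcpid"
    # Replay the saved bdev configuration; the redirect is an assumption,
    # feeding back the JSON that the earlier spdk_dd run used.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config \
        < /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json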
00:35:50.152 [2024-12-09 23:17:28.463489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77140 ] 00:35:50.412 [2024-12-09 23:17:28.624515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.412 [2024-12-09 23:17:28.757026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.355 23:17:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:51.356 23:17:29 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:35:51.356 23:17:29 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:35:51.356 [2024-12-09 23:17:29.698860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:51.356 [2024-12-09 23:17:29.698956] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:51.619 [2024-12-09 23:17:29.878997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.619 [2024-12-09 23:17:29.879067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:51.619 [2024-12-09 23:17:29.879085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:51.619 [2024-12-09 23:17:29.879094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.619 [2024-12-09 23:17:29.882141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.619 [2024-12-09 23:17:29.882198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:51.619 [2024-12-09 23:17:29.882212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:35:51.619 [2024-12-09 23:17:29.882238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.619 [2024-12-09 23:17:29.882369] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:51.619 [2024-12-09 23:17:29.883083] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:51.619 [2024-12-09 23:17:29.883117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.619 [2024-12-09 23:17:29.883126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:51.619 [2024-12-09 23:17:29.883137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:35:51.619 [2024-12-09 23:17:29.883146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.619 [2024-12-09 23:17:29.885024] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:51.619 [2024-12-09 23:17:29.899757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.619 [2024-12-09 23:17:29.899823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:51.619 [2024-12-09 23:17:29.899839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.737 ms 00:35:51.619 [2024-12-09 23:17:29.899850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.619 [2024-12-09 23:17:29.899978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.619 [2024-12-09 23:17:29.899993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:51.619 [2024-12-09 23:17:29.900002] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:35:51.619 [2024-12-09 23:17:29.900012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.619 [2024-12-09 23:17:29.908765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.619 [2024-12-09 23:17:29.908822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:51.619 [2024-12-09 23:17:29.908834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.697 ms 00:35:51.619 [2024-12-09 23:17:29.908844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.619 [2024-12-09 23:17:29.908971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.620 [2024-12-09 23:17:29.908985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:51.620 [2024-12-09 23:17:29.908994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:35:51.620 [2024-12-09 23:17:29.909007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.620 [2024-12-09 23:17:29.909035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.620 [2024-12-09 23:17:29.909046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:51.620 [2024-12-09 23:17:29.909054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:51.620 [2024-12-09 23:17:29.909063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.620 [2024-12-09 23:17:29.909087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:35:51.620 [2024-12-09 23:17:29.913505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.620 [2024-12-09 23:17:29.913554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:51.620 [2024-12-09 23:17:29.913568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.421 ms 00:35:51.620 [2024-12-09 23:17:29.913577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.620 [2024-12-09 23:17:29.913673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.620 [2024-12-09 23:17:29.913683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:51.620 [2024-12-09 23:17:29.913694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:35:51.620 [2024-12-09 23:17:29.913705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.620 [2024-12-09 23:17:29.913728] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:51.620 [2024-12-09 23:17:29.913752] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:51.620 [2024-12-09 23:17:29.913801] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:51.620 [2024-12-09 23:17:29.913818] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:51.620 [2024-12-09 23:17:29.913928] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:51.620 [2024-12-09 23:17:29.913940] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:51.620 [2024-12-09 23:17:29.913956] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:51.620 [2024-12-09 23:17:29.913967] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:51.620 [2024-12-09 23:17:29.913977] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:51.620 [2024-12-09 23:17:29.913986] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:35:51.620 [2024-12-09 23:17:29.913996] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:51.620 [2024-12-09 23:17:29.914003] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:51.620 [2024-12-09 23:17:29.914015] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:51.620 [2024-12-09 23:17:29.914023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.620 [2024-12-09 23:17:29.914032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:51.620 [2024-12-09 23:17:29.914040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:35:51.620 [2024-12-09 23:17:29.914049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.620 [2024-12-09 23:17:29.914140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.620 [2024-12-09 23:17:29.914150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:51.620 [2024-12-09 23:17:29.914158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:51.620 [2024-12-09 23:17:29.914167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.620 [2024-12-09 23:17:29.914289] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:51.620 [2024-12-09 23:17:29.914310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:51.620 [2024-12-09 23:17:29.914320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:51.620 [2024-12-09 23:17:29.914330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:51.620 [2024-12-09 23:17:29.914351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:35:51.620 [2024-12-09 23:17:29.914369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:51.620 [2024-12-09 23:17:29.914377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:51.620 [2024-12-09 23:17:29.914391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:51.620 [2024-12-09 23:17:29.914400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:35:51.620 [2024-12-09 23:17:29.914406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:51.620 [2024-12-09 23:17:29.914416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:51.620 [2024-12-09 23:17:29.914425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:35:51.620 [2024-12-09 23:17:29.914434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:51.620 
[2024-12-09 23:17:29.914442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:51.620 [2024-12-09 23:17:29.914451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:35:51.620 [2024-12-09 23:17:29.914465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:51.620 [2024-12-09 23:17:29.914481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:51.620 [2024-12-09 23:17:29.914497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:51.620 [2024-12-09 23:17:29.914507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:51.620 [2024-12-09 23:17:29.914522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:51.620 [2024-12-09 23:17:29.914528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:51.620 [2024-12-09 23:17:29.914543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:51.620 [2024-12-09 23:17:29.914552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:51.620 [2024-12-09 23:17:29.914567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:51.620 [2024-12-09 23:17:29.914573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:51.620 [2024-12-09 23:17:29.914589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:51.620 [2024-12-09 23:17:29.914601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:35:51.620 [2024-12-09 23:17:29.914607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:51.620 [2024-12-09 23:17:29.914616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:51.620 [2024-12-09 23:17:29.914623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:35:51.620 [2024-12-09 23:17:29.914633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:51.620 [2024-12-09 23:17:29.914648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:35:51.620 [2024-12-09 23:17:29.914654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914662] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:51.620 [2024-12-09 23:17:29.914671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:51.620 [2024-12-09 23:17:29.914684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:51.620 [2024-12-09 23:17:29.914693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:51.620 [2024-12-09 23:17:29.914703] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:35:51.620 [2024-12-09 23:17:29.914710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:51.620 [2024-12-09 23:17:29.914719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:51.620 [2024-12-09 23:17:29.914726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:51.620 [2024-12-09 23:17:29.914735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:51.620 [2024-12-09 23:17:29.914742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:51.620 [2024-12-09 23:17:29.914752] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:51.620 [2024-12-09 23:17:29.914762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:51.620 [2024-12-09 23:17:29.914776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:35:51.620 [2024-12-09 23:17:29.914783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:35:51.620 [2024-12-09 23:17:29.914793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:35:51.620 [2024-12-09 23:17:29.914800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:35:51.620 [2024-12-09 23:17:29.914809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:35:51.620 [2024-12-09 23:17:29.914815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:35:51.620 [2024-12-09 23:17:29.914824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:35:51.620 [2024-12-09 23:17:29.914832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:35:51.620 [2024-12-09 23:17:29.914840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:35:51.620 [2024-12-09 23:17:29.914847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:35:51.620 [2024-12-09 23:17:29.914857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:35:51.621 [2024-12-09 23:17:29.914865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:35:51.621 [2024-12-09 23:17:29.914875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:35:51.621 [2024-12-09 23:17:29.914882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:35:51.621 [2024-12-09 23:17:29.914891] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:51.621 [2024-12-09 
23:17:29.914900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:51.621 [2024-12-09 23:17:29.914911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:51.621 [2024-12-09 23:17:29.914919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:51.621 [2024-12-09 23:17:29.914928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:51.621 [2024-12-09 23:17:29.914936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:51.621 [2024-12-09 23:17:29.914945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:29.914953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:51.621 [2024-12-09 23:17:29.914963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:35:51.621 [2024-12-09 23:17:29.914974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.621 [2024-12-09 23:17:29.947930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:29.947990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:51.621 [2024-12-09 23:17:29.948004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.892 ms 00:35:51.621 [2024-12-09 23:17:29.948015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.621 [2024-12-09 23:17:29.948156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:29.948167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:51.621 [2024-12-09 23:17:29.948178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:35:51.621 [2024-12-09 23:17:29.948187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.621 [2024-12-09 23:17:29.984032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:29.984091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:51.621 [2024-12-09 23:17:29.984105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.817 ms 00:35:51.621 [2024-12-09 23:17:29.984113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.621 [2024-12-09 23:17:29.984207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:29.984231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:51.621 [2024-12-09 23:17:29.984243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:51.621 [2024-12-09 23:17:29.984252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.621 [2024-12-09 23:17:29.984833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:29.984874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:51.621 [2024-12-09 23:17:29.984888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:35:51.621 [2024-12-09 23:17:29.984897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:35:51.621 [2024-12-09 23:17:29.985052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:29.985062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:51.621 [2024-12-09 23:17:29.985073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:35:51.621 [2024-12-09 23:17:29.985080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.621 [2024-12-09 23:17:30.003609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:30.003662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:51.621 [2024-12-09 23:17:30.003677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.501 ms 00:35:51.621 [2024-12-09 23:17:30.003685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.621 [2024-12-09 23:17:30.026104] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:51.621 [2024-12-09 23:17:30.026166] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:51.621 [2024-12-09 23:17:30.026186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:30.026195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:51.621 [2024-12-09 23:17:30.026207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.377 ms 00:35:51.621 [2024-12-09 23:17:30.026237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.621 [2024-12-09 23:17:30.052829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:30.052891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:51.621 [2024-12-09 23:17:30.052908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.476 ms 00:35:51.621 [2024-12-09 23:17:30.052919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.621 [2024-12-09 23:17:30.066415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.621 [2024-12-09 23:17:30.066471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:51.621 [2024-12-09 23:17:30.066490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.378 ms 00:35:51.621 [2024-12-09 23:17:30.066498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 23:17:30.079842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.882 [2024-12-09 23:17:30.079893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:51.882 [2024-12-09 23:17:30.079909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.237 ms 00:35:51.882 [2024-12-09 23:17:30.079918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 23:17:30.080654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.882 [2024-12-09 23:17:30.080687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:51.882 [2024-12-09 23:17:30.080701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:35:51.882 [2024-12-09 23:17:30.080709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 
23:17:30.148026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.882 [2024-12-09 23:17:30.148108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:51.882 [2024-12-09 23:17:30.148128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.281 ms 00:35:51.882 [2024-12-09 23:17:30.148137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 23:17:30.160053] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:35:51.882 [2024-12-09 23:17:30.181082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.882 [2024-12-09 23:17:30.181158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:51.882 [2024-12-09 23:17:30.181175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.790 ms 00:35:51.882 [2024-12-09 23:17:30.181188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 23:17:30.181313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.882 [2024-12-09 23:17:30.181328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:51.882 [2024-12-09 23:17:30.181338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:35:51.882 [2024-12-09 23:17:30.181348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 23:17:30.181406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.882 [2024-12-09 23:17:30.181433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:51.882 [2024-12-09 23:17:30.181441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:35:51.882 [2024-12-09 23:17:30.181454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 23:17:30.181481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.882 [2024-12-09 23:17:30.181493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:51.882 [2024-12-09 23:17:30.181501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:51.882 [2024-12-09 23:17:30.181515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 23:17:30.181552] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:51.882 [2024-12-09 23:17:30.181566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.882 [2024-12-09 23:17:30.181577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:51.882 [2024-12-09 23:17:30.181587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:51.882 [2024-12-09 23:17:30.181595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 23:17:30.209012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.882 [2024-12-09 23:17:30.209070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:51.882 [2024-12-09 23:17:30.209087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.383 ms 00:35:51.882 [2024-12-09 23:17:30.209096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 23:17:30.209249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.882 [2024-12-09 23:17:30.209263] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:51.882 [2024-12-09 23:17:30.209275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:35:51.882 [2024-12-09 23:17:30.209287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.882 [2024-12-09 23:17:30.210443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:51.882 [2024-12-09 23:17:30.213985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.079 ms, result 0 00:35:51.882 [2024-12-09 23:17:30.216392] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:51.882 Some configs were skipped because the RPC state that can call them passed over. 00:35:51.882 23:17:30 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:35:52.144 [2024-12-09 23:17:30.467334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.144 [2024-12-09 23:17:30.467412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:35:52.144 [2024-12-09 23:17:30.467428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.583 ms 00:35:52.144 [2024-12-09 23:17:30.467440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.144 [2024-12-09 23:17:30.467495] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.753 ms, result 0 00:35:52.144 true 00:35:52.144 23:17:30 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:35:52.406 [2024-12-09 23:17:30.683081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.406 [2024-12-09 23:17:30.683161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:35:52.406 [2024-12-09 23:17:30.683177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:35:52.406 [2024-12-09 23:17:30.683186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.406 [2024-12-09 23:17:30.683245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.157 ms, result 0 00:35:52.406 true 00:35:52.406 23:17:30 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77140 00:35:52.406 23:17:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77140 ']' 00:35:52.406 23:17:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77140 00:35:52.406 23:17:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:35:52.406 23:17:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.406 23:17:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77140 00:35:52.406 killing process with pid 77140 00:35:52.406 23:17:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:52.406 23:17:30 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:52.406 23:17:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77140' 00:35:52.406 23:17:30 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77140 00:35:52.406 23:17:30 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77140 00:35:53.351 [2024-12-09 23:17:31.505674] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.351 [2024-12-09 23:17:31.505749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:53.351 [2024-12-09 23:17:31.505764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:53.351 [2024-12-09 23:17:31.505775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.351 [2024-12-09 23:17:31.505801] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:35:53.351 [2024-12-09 23:17:31.508854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.351 [2024-12-09 23:17:31.508899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:53.351 [2024-12-09 23:17:31.508917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.031 ms 00:35:53.351 [2024-12-09 23:17:31.508925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.351 [2024-12-09 23:17:31.509247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.351 [2024-12-09 23:17:31.509266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:53.351 [2024-12-09 23:17:31.509279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:35:53.351 [2024-12-09 23:17:31.509287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.351 [2024-12-09 23:17:31.515901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.351 [2024-12-09 23:17:31.515953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:53.351 [2024-12-09 23:17:31.515970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.586 ms 00:35:53.351 [2024-12-09 23:17:31.515977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.351 [2024-12-09 23:17:31.523046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.351 [2024-12-09 23:17:31.523099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:53.351 [2024-12-09 23:17:31.523117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.013 ms 00:35:53.351 [2024-12-09 23:17:31.523125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.351 [2024-12-09 23:17:31.534254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.352 [2024-12-09 23:17:31.534312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:53.352 [2024-12-09 23:17:31.534328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.053 ms 00:35:53.352 [2024-12-09 23:17:31.534336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.352 [2024-12-09 23:17:31.542919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.352 [2024-12-09 23:17:31.542972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:53.352 [2024-12-09 23:17:31.542986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.521 ms 00:35:53.352 [2024-12-09 23:17:31.542993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.352 [2024-12-09 23:17:31.543161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.352 [2024-12-09 23:17:31.543172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:53.352 [2024-12-09 23:17:31.543184] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:35:53.352 [2024-12-09 23:17:31.543192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.352 [2024-12-09 23:17:31.554633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.352 [2024-12-09 23:17:31.554683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:53.352 [2024-12-09 23:17:31.554697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.412 ms 00:35:53.352 [2024-12-09 23:17:31.554704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.352 [2024-12-09 23:17:31.565975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.352 [2024-12-09 23:17:31.566020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:53.352 [2024-12-09 23:17:31.566040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.209 ms 00:35:53.352 [2024-12-09 23:17:31.566047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.352 [2024-12-09 23:17:31.576406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.352 [2024-12-09 23:17:31.576453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:53.352 [2024-12-09 23:17:31.576466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.298 ms 00:35:53.352 [2024-12-09 23:17:31.576474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.352 [2024-12-09 23:17:31.587228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.352 [2024-12-09 23:17:31.587274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:53.352 [2024-12-09 23:17:31.587287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.656 ms 00:35:53.352 [2024-12-09 23:17:31.587295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.352 [2024-12-09 23:17:31.587348] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:53.352 [2024-12-09 23:17:31.587363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 
23:17:31.587455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:35:53.352 [2024-12-09 23:17:31.587675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:53.352 [2024-12-09 23:17:31.587956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.587968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.587975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.587985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.587993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:53.353 [2024-12-09 23:17:31.588266] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:53.353 [2024-12-09 23:17:31.588282] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8d08cbd7-a528-4f3d-b495-445a47785ac7 00:35:53.353 [2024-12-09 23:17:31.588293] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:53.353 [2024-12-09 23:17:31.588303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:53.353 [2024-12-09 23:17:31.588311] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:53.353 [2024-12-09 23:17:31.588321] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:53.353 [2024-12-09 23:17:31.588328] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:53.353 [2024-12-09 23:17:31.588339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:53.353 [2024-12-09 23:17:31.588346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:53.353 [2024-12-09 23:17:31.588354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:53.353 [2024-12-09 23:17:31.588360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:53.353 [2024-12-09 23:17:31.588369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:53.353 [2024-12-09 23:17:31.588377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:53.353 [2024-12-09 23:17:31.588388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:35:53.353 [2024-12-09 23:17:31.588396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.353 [2024-12-09 23:17:31.602354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.353 [2024-12-09 23:17:31.602403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:53.353 [2024-12-09 23:17:31.602419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.909 ms 00:35:53.353 [2024-12-09 23:17:31.602427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.353 [2024-12-09 23:17:31.602873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.353 [2024-12-09 23:17:31.602895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:53.353 [2024-12-09 23:17:31.602910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:35:53.353 [2024-12-09 23:17:31.602917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.353 [2024-12-09 23:17:31.652515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.353 [2024-12-09 23:17:31.652573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:53.353 [2024-12-09 23:17:31.652588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.353 [2024-12-09 23:17:31.652598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.353 [2024-12-09 23:17:31.652705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.353 [2024-12-09 23:17:31.652715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:53.353 [2024-12-09 23:17:31.652729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.353 [2024-12-09 23:17:31.652737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.353 [2024-12-09 23:17:31.652794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.353 [2024-12-09 23:17:31.652804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:53.353 [2024-12-09 23:17:31.652817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.353 [2024-12-09 23:17:31.652825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.353 [2024-12-09 23:17:31.652845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.353 [2024-12-09 23:17:31.652854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:53.353 [2024-12-09 23:17:31.652864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.353 [2024-12-09 23:17:31.652874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.353 [2024-12-09 23:17:31.739238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.353 [2024-12-09 23:17:31.739306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:53.353 [2024-12-09 23:17:31.739323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.353 [2024-12-09 23:17:31.739332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.615 [2024-12-09 
23:17:31.811186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.615 [2024-12-09 23:17:31.811261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:53.615 [2024-12-09 23:17:31.811278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.615 [2024-12-09 23:17:31.811290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.615 [2024-12-09 23:17:31.811376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.615 [2024-12-09 23:17:31.811386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:53.615 [2024-12-09 23:17:31.811399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.615 [2024-12-09 23:17:31.811408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.615 [2024-12-09 23:17:31.811442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.615 [2024-12-09 23:17:31.811451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:53.615 [2024-12-09 23:17:31.811462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.615 [2024-12-09 23:17:31.811470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.615 [2024-12-09 23:17:31.811575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.615 [2024-12-09 23:17:31.811586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:53.615 [2024-12-09 23:17:31.811596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.615 [2024-12-09 23:17:31.811605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.615 [2024-12-09 23:17:31.811643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.615 [2024-12-09 23:17:31.811653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:53.615 [2024-12-09 23:17:31.811663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.615 [2024-12-09 23:17:31.811671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.615 [2024-12-09 23:17:31.811720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.615 [2024-12-09 23:17:31.811730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:53.615 [2024-12-09 23:17:31.811742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.615 [2024-12-09 23:17:31.811750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.615 [2024-12-09 23:17:31.811803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.615 [2024-12-09 23:17:31.811814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:53.615 [2024-12-09 23:17:31.811824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.615 [2024-12-09 23:17:31.811832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.615 [2024-12-09 23:17:31.811991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 306.285 ms, result 0 00:35:54.562 23:17:32 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:54.562 [2024-12-09 23:17:32.962175] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:35:54.562 [2024-12-09 23:17:32.962616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77205 ] 00:35:54.824 [2024-12-09 23:17:33.125303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.824 [2024-12-09 23:17:33.258280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:55.396 [2024-12-09 23:17:33.562773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:55.396 [2024-12-09 23:17:33.562853] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:55.396 [2024-12-09 23:17:33.725685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.725743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:55.396 [2024-12-09 23:17:33.725758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:55.396 [2024-12-09 23:17:33.725767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.728868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.728917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:55.396 [2024-12-09 23:17:33.728929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.079 ms 00:35:55.396 [2024-12-09 23:17:33.728938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.729062] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:55.396 [2024-12-09 23:17:33.729934] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:55.396 [2024-12-09 23:17:33.729973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.729983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:55.396 [2024-12-09 23:17:33.729993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:35:55.396 [2024-12-09 23:17:33.730001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.731799] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:55.396 [2024-12-09 23:17:33.746177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.746234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:55.396 [2024-12-09 23:17:33.746248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.379 ms 00:35:55.396 [2024-12-09 23:17:33.746257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.746387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.746399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:55.396 [2024-12-09 23:17:33.746409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:35:55.396 [2024-12-09 
23:17:33.746417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.755028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.755069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:55.396 [2024-12-09 23:17:33.755079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.564 ms 00:35:55.396 [2024-12-09 23:17:33.755087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.755197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.755207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:55.396 [2024-12-09 23:17:33.755243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:35:55.396 [2024-12-09 23:17:33.755253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.755287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.755297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:55.396 [2024-12-09 23:17:33.755305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:55.396 [2024-12-09 23:17:33.755315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.755339] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:35:55.396 [2024-12-09 23:17:33.759435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.759470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:55.396 [2024-12-09 23:17:33.759482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.103 ms 00:35:55.396 [2024-12-09 23:17:33.759490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.759576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.759587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:55.396 [2024-12-09 23:17:33.759596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:35:55.396 [2024-12-09 23:17:33.759604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.759630] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:55.396 [2024-12-09 23:17:33.759653] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:55.396 [2024-12-09 23:17:33.759689] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:55.396 [2024-12-09 23:17:33.759706] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:55.396 [2024-12-09 23:17:33.759811] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:55.396 [2024-12-09 23:17:33.759821] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:55.396 [2024-12-09 23:17:33.759832] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
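For context on the read-back step launched at trim.sh@105 just above: spdk_dd is SPDK's dd(1) analogue, reading from a bdev (--ib=ftl0) through the SPDK bdev layer rather than a kernel block device and writing to a regular file (--of). A minimal sketch of that invocation, with every flag, path, and the 65536-block count taken from the command line captured in this log (the SPDK shell variable is shorthand introduced here, not part of the harness):

    # Read 65536 blocks back out of the FTL bdev into a flat file;
    # --json points spdk_dd at the bdev configuration the test saved earlier.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_dd" \
        --ib=ftl0 \
        --of="$SPDK/test/ftl/data" \
        --count=65536 \
        --json="$SPDK/test/ftl/config/ftl.json"

The FTL startup trace around this point is spdk_dd bringing the device up from that saved config before the copy begins; the "data: OK" line from md5sum -c further down (trim.sh@106) is the harness confirming the read-back matches the pattern written earlier in the test.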
00:35:55.396 [2024-12-09 23:17:33.759847] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:55.396 [2024-12-09 23:17:33.759856] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:55.396 [2024-12-09 23:17:33.759865] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:35:55.396 [2024-12-09 23:17:33.759873] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:55.396 [2024-12-09 23:17:33.759881] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:55.396 [2024-12-09 23:17:33.759888] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:55.396 [2024-12-09 23:17:33.759896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.759904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:55.396 [2024-12-09 23:17:33.759913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:35:55.396 [2024-12-09 23:17:33.759920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.760008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.396 [2024-12-09 23:17:33.760019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:55.396 [2024-12-09 23:17:33.760027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:55.396 [2024-12-09 23:17:33.760034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.396 [2024-12-09 23:17:33.760133] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:55.396 [2024-12-09 23:17:33.760149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:55.396 [2024-12-09 23:17:33.760158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:55.396 [2024-12-09 23:17:33.760166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.396 [2024-12-09 23:17:33.760174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:55.397 [2024-12-09 23:17:33.760181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:35:55.397 [2024-12-09 23:17:33.760195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:55.397 [2024-12-09 23:17:33.760202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:55.397 [2024-12-09 23:17:33.760236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:55.397 [2024-12-09 23:17:33.760251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:35:55.397 [2024-12-09 23:17:33.760259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:55.397 [2024-12-09 23:17:33.760266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:55.397 [2024-12-09 23:17:33.760273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:35:55.397 [2024-12-09 23:17:33.760280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:35:55.397 [2024-12-09 23:17:33.760294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:35:55.397 [2024-12-09 23:17:33.760300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:55.397 [2024-12-09 23:17:33.760315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:55.397 [2024-12-09 23:17:33.760328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:55.397 [2024-12-09 23:17:33.760335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:55.397 [2024-12-09 23:17:33.760350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:55.397 [2024-12-09 23:17:33.760357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:55.397 [2024-12-09 23:17:33.760371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:55.397 [2024-12-09 23:17:33.760378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:55.397 [2024-12-09 23:17:33.760392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:55.397 [2024-12-09 23:17:33.760398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:55.397 [2024-12-09 23:17:33.760417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:55.397 [2024-12-09 23:17:33.760424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:35:55.397 [2024-12-09 23:17:33.760431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:55.397 [2024-12-09 23:17:33.760438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:55.397 [2024-12-09 23:17:33.760445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:35:55.397 [2024-12-09 23:17:33.760452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:55.397 [2024-12-09 23:17:33.760465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:35:55.397 [2024-12-09 23:17:33.760472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760479] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:55.397 [2024-12-09 23:17:33.760487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:55.397 [2024-12-09 23:17:33.760497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:55.397 [2024-12-09 23:17:33.760504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.397 [2024-12-09 23:17:33.760512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:55.397 [2024-12-09 23:17:33.760519] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:55.397 [2024-12-09 23:17:33.760525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:55.397 [2024-12-09 23:17:33.760532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:55.397 [2024-12-09 23:17:33.760539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:55.397 [2024-12-09 23:17:33.760546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:55.397 [2024-12-09 23:17:33.760554] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:55.397 [2024-12-09 23:17:33.760563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:55.397 [2024-12-09 23:17:33.760572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:35:55.397 [2024-12-09 23:17:33.760579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:35:55.397 [2024-12-09 23:17:33.760586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:35:55.397 [2024-12-09 23:17:33.760593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:35:55.397 [2024-12-09 23:17:33.760601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:35:55.397 [2024-12-09 23:17:33.760608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:35:55.397 [2024-12-09 23:17:33.760615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:35:55.397 [2024-12-09 23:17:33.760622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:35:55.397 [2024-12-09 23:17:33.760630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:35:55.397 [2024-12-09 23:17:33.760637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:35:55.397 [2024-12-09 23:17:33.760650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:35:55.397 [2024-12-09 23:17:33.760657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:35:55.397 [2024-12-09 23:17:33.760665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:35:55.397 [2024-12-09 23:17:33.760672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:35:55.397 [2024-12-09 23:17:33.760679] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:55.397 [2024-12-09 23:17:33.760687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:55.397 [2024-12-09 23:17:33.760695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:55.397 [2024-12-09 23:17:33.760703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:55.397 [2024-12-09 23:17:33.760710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:55.397 [2024-12-09 23:17:33.760718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:55.397 [2024-12-09 23:17:33.760725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.397 [2024-12-09 23:17:33.760736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:55.397 [2024-12-09 23:17:33.760744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:35:55.397 [2024-12-09 23:17:33.760751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.397 [2024-12-09 23:17:33.793322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.397 [2024-12-09 23:17:33.793456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:55.397 [2024-12-09 23:17:33.793469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.516 ms 00:35:55.397 [2024-12-09 23:17:33.793478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.397 [2024-12-09 23:17:33.793621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.397 [2024-12-09 23:17:33.793632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:55.397 [2024-12-09 23:17:33.793641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:35:55.397 [2024-12-09 23:17:33.793649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.397 [2024-12-09 23:17:33.840641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.397 [2024-12-09 23:17:33.840690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:55.397 [2024-12-09 23:17:33.840707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.967 ms 00:35:55.397 [2024-12-09 23:17:33.840717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.397 [2024-12-09 23:17:33.840835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.397 [2024-12-09 23:17:33.840847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:55.397 [2024-12-09 23:17:33.840857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:55.397 [2024-12-09 23:17:33.840866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.397 [2024-12-09 23:17:33.841510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.397 [2024-12-09 23:17:33.841547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:55.397 [2024-12-09 23:17:33.841573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:35:55.397 [2024-12-09 23:17:33.841581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.397 [2024-12-09 23:17:33.841742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:35:55.397 [2024-12-09 23:17:33.841752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:55.397 [2024-12-09 23:17:33.841761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:35:55.397 [2024-12-09 23:17:33.841768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:33.858471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:33.858512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:55.658 [2024-12-09 23:17:33.858523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.679 ms 00:35:55.658 [2024-12-09 23:17:33.858531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:33.873084] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:55.658 [2024-12-09 23:17:33.873131] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:55.658 [2024-12-09 23:17:33.873146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:33.873156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:55.658 [2024-12-09 23:17:33.873166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.491 ms 00:35:55.658 [2024-12-09 23:17:33.873174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:33.899063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:33.899110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:55.658 [2024-12-09 23:17:33.899123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.768 ms 00:35:55.658 [2024-12-09 23:17:33.899131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:33.912371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:33.912415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:55.658 [2024-12-09 23:17:33.912428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.120 ms 00:35:55.658 [2024-12-09 23:17:33.912435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:33.925526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:33.925568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:55.658 [2024-12-09 23:17:33.925580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.994 ms 00:35:55.658 [2024-12-09 23:17:33.925589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:33.926266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:33.926297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:55.658 [2024-12-09 23:17:33.926308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:35:55.658 [2024-12-09 23:17:33.926316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:33.995633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 
23:17:33.995699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:55.658 [2024-12-09 23:17:33.995716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.278 ms 00:35:55.658 [2024-12-09 23:17:33.995725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:34.007530] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:35:55.658 [2024-12-09 23:17:34.027598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:34.027644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:55.658 [2024-12-09 23:17:34.027659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.749 ms 00:35:55.658 [2024-12-09 23:17:34.027674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:34.027777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:34.027789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:55.658 [2024-12-09 23:17:34.027799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:35:55.658 [2024-12-09 23:17:34.027808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:34.027868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:34.027878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:55.658 [2024-12-09 23:17:34.027887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:35:55.658 [2024-12-09 23:17:34.027900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:34.027933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:34.027942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:55.658 [2024-12-09 23:17:34.027951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:55.658 [2024-12-09 23:17:34.027959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:34.027998] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:55.658 [2024-12-09 23:17:34.028009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:34.028017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:55.658 [2024-12-09 23:17:34.028026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:55.658 [2024-12-09 23:17:34.028033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:34.054737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:34.054787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:55.658 [2024-12-09 23:17:34.054803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.681 ms 00:35:55.658 [2024-12-09 23:17:34.054812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.658 [2024-12-09 23:17:34.054945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.658 [2024-12-09 23:17:34.054958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:55.659 [2024-12-09 
23:17:34.054977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:35:55.659 [2024-12-09 23:17:34.054986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.659 [2024-12-09 23:17:34.056144] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:55.659 [2024-12-09 23:17:34.059759] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.127 ms, result 0 00:35:55.659 [2024-12-09 23:17:34.061164] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:55.659 [2024-12-09 23:17:34.074837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:57.111  [2024-12-09T23:17:36.516Z] Copying: 13/256 [MB] (13 MBps) [2024-12-09T23:17:37.455Z] Copying: 24180/262144 [kB] (10180 kBps) [2024-12-09T23:17:38.397Z] Copying: 34320/262144 [kB] (10140 kBps) [2024-12-09T23:17:39.337Z] Copying: 58/256 [MB] (25 MBps) [2024-12-09T23:17:40.722Z] Copying: 69/256 [MB] (10 MBps) [2024-12-09T23:17:41.665Z] Copying: 83/256 [MB] (13 MBps) [2024-12-09T23:17:42.609Z] Copying: 104/256 [MB] (21 MBps) [2024-12-09T23:17:43.575Z] Copying: 127/256 [MB] (22 MBps) [2024-12-09T23:17:44.522Z] Copying: 140/256 [MB] (12 MBps) [2024-12-09T23:17:45.462Z] Copying: 154/256 [MB] (14 MBps) [2024-12-09T23:17:46.404Z] Copying: 182/256 [MB] (28 MBps) [2024-12-09T23:17:47.345Z] Copying: 220/256 [MB] (37 MBps) [2024-12-09T23:17:47.606Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-09 23:17:47.481250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:09.144 [2024-12-09 23:17:47.491638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.144 [2024-12-09 23:17:47.491679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:09.144 [2024-12-09 23:17:47.491698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:09.144 [2024-12-09 23:17:47.491707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.144 [2024-12-09 23:17:47.491730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:36:09.144 [2024-12-09 23:17:47.495067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.144 [2024-12-09 23:17:47.495101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:09.144 [2024-12-09 23:17:47.495111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.322 ms 00:36:09.144 [2024-12-09 23:17:47.495120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.144 [2024-12-09 23:17:47.495426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.144 [2024-12-09 23:17:47.495448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:09.144 [2024-12-09 23:17:47.495458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:36:09.144 [2024-12-09 23:17:47.495465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.144 [2024-12-09 23:17:47.499768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.144 [2024-12-09 23:17:47.499800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:09.144 [2024-12-09 23:17:47.499810] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.281 ms 00:36:09.144 [2024-12-09 23:17:47.499819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.144 [2024-12-09 23:17:47.506708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.144 [2024-12-09 23:17:47.506738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:09.144 [2024-12-09 23:17:47.506749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.868 ms 00:36:09.144 [2024-12-09 23:17:47.506757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.144 [2024-12-09 23:17:47.529496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.144 [2024-12-09 23:17:47.529528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:09.144 [2024-12-09 23:17:47.529539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.678 ms 00:36:09.144 [2024-12-09 23:17:47.529546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.144 [2024-12-09 23:17:47.543526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.144 [2024-12-09 23:17:47.543557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:09.144 [2024-12-09 23:17:47.543573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.958 ms 00:36:09.144 [2024-12-09 23:17:47.543580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.144 [2024-12-09 23:17:47.543717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.144 [2024-12-09 23:17:47.543732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:09.144 [2024-12-09 23:17:47.543746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:36:09.144 [2024-12-09 23:17:47.543754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.144 [2024-12-09 23:17:47.566745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.144 [2024-12-09 23:17:47.566782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:09.144 [2024-12-09 23:17:47.566793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.975 ms 00:36:09.144 [2024-12-09 23:17:47.566801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.144 [2024-12-09 23:17:47.588649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.144 [2024-12-09 23:17:47.588687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:09.144 [2024-12-09 23:17:47.588697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.827 ms 00:36:09.144 [2024-12-09 23:17:47.588704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.415 [2024-12-09 23:17:47.610856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.415 [2024-12-09 23:17:47.610889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:09.415 [2024-12-09 23:17:47.610899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.110 ms 00:36:09.415 [2024-12-09 23:17:47.610906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.415 [2024-12-09 23:17:47.632508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.415 [2024-12-09 23:17:47.632540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
clean state 00:36:09.415 [2024-12-09 23:17:47.632550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.556 ms 00:36:09.415 [2024-12-09 23:17:47.632557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.415 [2024-12-09 23:17:47.632578] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:09.415 [2024-12-09 23:17:47.632591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:09.415 [2024-12-09 23:17:47.632682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632754] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.632993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 
23:17:47.633014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:36:09.416 [2024-12-09 23:17:47.633304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:09.416 [2024-12-09 23:17:47.633611] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:09.416 [2024-12-09 23:17:47.633619] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8d08cbd7-a528-4f3d-b495-445a47785ac7 00:36:09.416 [2024-12-09 23:17:47.633627] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:36:09.416 [2024-12-09 23:17:47.633635] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:09.416 [2024-12-09 23:17:47.633641] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:09.416 [2024-12-09 23:17:47.633649] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:09.416 [2024-12-09 23:17:47.633655] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:09.416 [2024-12-09 23:17:47.633663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:09.416 [2024-12-09 23:17:47.633672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:09.416 [2024-12-09 23:17:47.633678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:09.416 [2024-12-09 23:17:47.633688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:09.416 [2024-12-09 23:17:47.633700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.416 [2024-12-09 23:17:47.633712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:09.416 [2024-12-09 23:17:47.633725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:36:09.416 [2024-12-09 23:17:47.633735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.416 [2024-12-09 23:17:47.645965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.416 [2024-12-09 23:17:47.645997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:09.416 [2024-12-09 23:17:47.646007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.187 ms 00:36:09.416 [2024-12-09 23:17:47.646015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.416 [2024-12-09 23:17:47.646507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:09.416 [2024-12-09 23:17:47.646533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:09.417 [2024-12-09 23:17:47.646542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:36:09.417 [2024-12-09 23:17:47.646550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.681044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.681085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:09.417 [2024-12-09 23:17:47.681096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.681107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 
23:17:47.681180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.681189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:09.417 [2024-12-09 23:17:47.681197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.681204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.681254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.681263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:09.417 [2024-12-09 23:17:47.681271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.681278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.681298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.681306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:09.417 [2024-12-09 23:17:47.681313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.681322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.756610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.756655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:09.417 [2024-12-09 23:17:47.756666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.756674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.818889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.818932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:09.417 [2024-12-09 23:17:47.818943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.818951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.818999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.819008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:09.417 [2024-12-09 23:17:47.819016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.819023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.819050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.819063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:09.417 [2024-12-09 23:17:47.819070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.819077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.819162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.819171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:09.417 [2024-12-09 23:17:47.819179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.819186] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.819248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.819262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:09.417 [2024-12-09 23:17:47.819277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.819288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.819333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.819348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:09.417 [2024-12-09 23:17:47.819360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.819371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.819418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:09.417 [2024-12-09 23:17:47.819433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:09.417 [2024-12-09 23:17:47.819446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:09.417 [2024-12-09 23:17:47.819457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:09.417 [2024-12-09 23:17:47.819611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 327.968 ms, result 0 00:36:10.366 00:36:10.366 00:36:10.366 23:17:48 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:10.627 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:36:10.627 23:17:49 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:36:10.627 23:17:49 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:36:10.627 23:17:49 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:10.627 23:17:49 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:10.627 23:17:49 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:36:10.888 23:17:49 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:36:10.888 23:17:49 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77140 00:36:10.888 23:17:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77140 ']' 00:36:10.888 23:17:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77140 00:36:10.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77140) - No such process 00:36:10.888 Process with pid 77140 is not found 00:36:10.888 23:17:49 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77140 is not found' 00:36:10.888 00:36:10.888 real 1m10.210s 00:36:10.888 user 1m39.885s 00:36:10.888 sys 0m5.694s 00:36:10.888 ************************************ 00:36:10.888 END TEST ftl_trim 00:36:10.888 ************************************ 00:36:10.888 23:17:49 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:10.888 23:17:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:36:10.888 23:17:49 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:36:10.888 23:17:49 ftl -- common/autotest_common.sh@1105 -- # '[' 
5 -le 1 ']' 00:36:10.888 23:17:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:10.888 23:17:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:10.888 ************************************ 00:36:10.888 START TEST ftl_restore 00:36:10.888 ************************************ 00:36:10.888 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:36:10.888 * Looking for test storage... 00:36:10.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:36:10.888 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:10.888 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:36:10.888 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:11.149 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.149 23:17:49 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:36:11.149 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.149 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:11.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.149 --rc genhtml_branch_coverage=1 00:36:11.149 --rc genhtml_function_coverage=1 00:36:11.149 --rc genhtml_legend=1 00:36:11.149 --rc geninfo_all_blocks=1 00:36:11.149 --rc geninfo_unexecuted_blocks=1 00:36:11.149 00:36:11.149 ' 00:36:11.149 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:11.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.149 --rc genhtml_branch_coverage=1 00:36:11.149 --rc genhtml_function_coverage=1 00:36:11.149 --rc genhtml_legend=1 00:36:11.149 --rc geninfo_all_blocks=1 00:36:11.149 --rc geninfo_unexecuted_blocks=1 00:36:11.149 00:36:11.149 ' 00:36:11.149 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:11.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.149 --rc genhtml_branch_coverage=1 00:36:11.149 --rc genhtml_function_coverage=1 00:36:11.149 --rc genhtml_legend=1 00:36:11.149 --rc geninfo_all_blocks=1 00:36:11.149 --rc geninfo_unexecuted_blocks=1 00:36:11.149 00:36:11.149 ' 00:36:11.149 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:11.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.149 --rc genhtml_branch_coverage=1 00:36:11.149 --rc genhtml_function_coverage=1 00:36:11.149 --rc genhtml_legend=1 00:36:11.149 --rc geninfo_all_blocks=1 00:36:11.149 --rc geninfo_unexecuted_blocks=1 00:36:11.149 00:36:11.149 ' 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
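The xtrace above is scripts/common.sh choosing lcov option names by comparing "lcov --version" (1.15 here) against 2 with its lt/cmp_versions/decimal helpers: split on dots and dashes, compare component-wise, pad the shorter version with zeros. A minimal standalone sketch of that logic, assuming plain bash and numeric components (the decimal regex guard above is what enforces the numeric part); this is an illustrative reimplementation, not the upstream helpers:

    version_lt() { # version_lt 1.15 2 -> true iff $1 < $2
        local IFS=.-                      # same split as the read -ra above
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                          # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'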
00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.6IiP82o1ep 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:36:11.149 
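The getopts run just above is restore.sh parsing its arguments: -c 0000:00:10.0 selects the NV-cache PCI BDF, shift 2 drops the consumed options, and the first positional argument 0000:00:11.0 becomes the base device, with a 240 s timeout. A hedged sketch of that convention as a standalone script; the -u and -f meanings are inferred from the later uuid and fast-flag checks in this trace, not confirmed against the upstream source:

    #!/usr/bin/env bash
    # Sketch of restore.sh's CLI as traced: restore.sh -c 0000:00:10.0 0000:00:11.0
    nv_cache='' uuid='' fast=''
    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # PCI BDF of the NV-cache controller
            u) uuid=$OPTARG ;;       # presumably: reuse an existing FTL instance UUID
            f) fast=1 ;;             # presumably: fast-shutdown variant of the test
        esac
    done
    shift $((OPTIND - 1))            # the trace's 'shift 2'
    device=$1                        # 0000:00:11.0 in this run
    timeout=240

With -f never passed, fast stays empty, which is also what the later "restore.sh: line 54: [: : integer expression expected" complaint is about: [ '' -eq 1 ] is a bash error, but an erroring test is simply false, so the run continues on the non-fast path.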
23:17:49 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77441 00:36:11.149 23:17:49 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77441 00:36:11.149 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77441 ']' 00:36:11.149 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.150 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:11.150 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.150 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:11.150 23:17:49 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:11.150 23:17:49 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:36:11.150 [2024-12-09 23:17:49.505356] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:36:11.150 [2024-12-09 23:17:49.505521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77441 ] 00:36:11.410 [2024-12-09 23:17:49.672023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.410 [2024-12-09 23:17:49.802754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:12.355 23:17:50 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:12.355 23:17:50 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:36:12.355 23:17:50 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:36:12.355 23:17:50 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:36:12.355 23:17:50 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:36:12.355 23:17:50 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:36:12.355 23:17:50 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:36:12.355 23:17:50 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:12.355 23:17:50 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:36:12.355 23:17:50 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:36:12.617 23:17:50 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:36:12.617 23:17:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:36:12.617 23:17:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:12.617 23:17:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:36:12.617 23:17:50 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:36:12.617 23:17:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:36:12.617 23:17:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:12.617 { 00:36:12.617 "name": "nvme0n1", 00:36:12.617 "aliases": [ 00:36:12.617 "79017cde-1a1f-43b6-9a11-8e36f1184ff2" 00:36:12.617 ], 00:36:12.617 "product_name": "NVMe disk", 00:36:12.617 "block_size": 4096, 00:36:12.617 "num_blocks": 1310720, 00:36:12.617 "uuid": 
"79017cde-1a1f-43b6-9a11-8e36f1184ff2", 00:36:12.617 "numa_id": -1, 00:36:12.617 "assigned_rate_limits": { 00:36:12.617 "rw_ios_per_sec": 0, 00:36:12.617 "rw_mbytes_per_sec": 0, 00:36:12.617 "r_mbytes_per_sec": 0, 00:36:12.617 "w_mbytes_per_sec": 0 00:36:12.617 }, 00:36:12.617 "claimed": true, 00:36:12.617 "claim_type": "read_many_write_one", 00:36:12.617 "zoned": false, 00:36:12.617 "supported_io_types": { 00:36:12.617 "read": true, 00:36:12.617 "write": true, 00:36:12.617 "unmap": true, 00:36:12.617 "flush": true, 00:36:12.617 "reset": true, 00:36:12.617 "nvme_admin": true, 00:36:12.617 "nvme_io": true, 00:36:12.617 "nvme_io_md": false, 00:36:12.617 "write_zeroes": true, 00:36:12.617 "zcopy": false, 00:36:12.617 "get_zone_info": false, 00:36:12.617 "zone_management": false, 00:36:12.617 "zone_append": false, 00:36:12.617 "compare": true, 00:36:12.617 "compare_and_write": false, 00:36:12.617 "abort": true, 00:36:12.617 "seek_hole": false, 00:36:12.617 "seek_data": false, 00:36:12.617 "copy": true, 00:36:12.618 "nvme_iov_md": false 00:36:12.618 }, 00:36:12.618 "driver_specific": { 00:36:12.618 "nvme": [ 00:36:12.618 { 00:36:12.618 "pci_address": "0000:00:11.0", 00:36:12.618 "trid": { 00:36:12.618 "trtype": "PCIe", 00:36:12.618 "traddr": "0000:00:11.0" 00:36:12.618 }, 00:36:12.618 "ctrlr_data": { 00:36:12.618 "cntlid": 0, 00:36:12.618 "vendor_id": "0x1b36", 00:36:12.618 "model_number": "QEMU NVMe Ctrl", 00:36:12.618 "serial_number": "12341", 00:36:12.618 "firmware_revision": "8.0.0", 00:36:12.618 "subnqn": "nqn.2019-08.org.qemu:12341", 00:36:12.618 "oacs": { 00:36:12.618 "security": 0, 00:36:12.618 "format": 1, 00:36:12.618 "firmware": 0, 00:36:12.618 "ns_manage": 1 00:36:12.618 }, 00:36:12.618 "multi_ctrlr": false, 00:36:12.618 "ana_reporting": false 00:36:12.618 }, 00:36:12.618 "vs": { 00:36:12.618 "nvme_version": "1.4" 00:36:12.618 }, 00:36:12.618 "ns_data": { 00:36:12.618 "id": 1, 00:36:12.618 "can_share": false 00:36:12.618 } 00:36:12.618 } 00:36:12.618 ], 00:36:12.618 "mp_policy": "active_passive" 00:36:12.618 } 00:36:12.618 } 00:36:12.618 ]' 00:36:12.618 23:17:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:12.618 23:17:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:36:12.618 23:17:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:12.618 23:17:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:36:12.618 23:17:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:36:12.618 23:17:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:36:12.618 23:17:51 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:36:12.618 23:17:51 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:36:12.618 23:17:51 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:36:12.618 23:17:51 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:12.618 23:17:51 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:12.879 23:17:51 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a6a0c9f7-00d6-434f-9541-c735c2a3f4ce 00:36:12.879 23:17:51 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:36:12.879 23:17:51 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a6a0c9f7-00d6-434f-9541-c735c2a3f4ce 00:36:13.140 23:17:51 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:36:13.401 23:17:51 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=e68f867e-5095-49c3-932b-2759c08571fc 00:36:13.401 23:17:51 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e68f867e-5095-49c3-932b-2759c08571fc 00:36:13.663 23:17:51 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:13.663 23:17:51 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:36:13.663 23:17:51 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:13.663 23:17:51 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:36:13.663 23:17:51 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:36:13.663 23:17:51 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:13.663 23:17:51 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:36:13.663 23:17:51 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:13.663 23:17:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:13.663 23:17:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:13.663 23:17:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:36:13.663 23:17:51 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:36:13.663 23:17:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:13.663 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:13.663 { 00:36:13.663 "name": "0dfc02de-1811-49cb-bcc8-ccccb3657390", 00:36:13.663 "aliases": [ 00:36:13.663 "lvs/nvme0n1p0" 00:36:13.663 ], 00:36:13.663 "product_name": "Logical Volume", 00:36:13.663 "block_size": 4096, 00:36:13.663 "num_blocks": 26476544, 00:36:13.663 "uuid": "0dfc02de-1811-49cb-bcc8-ccccb3657390", 00:36:13.663 "assigned_rate_limits": { 00:36:13.663 "rw_ios_per_sec": 0, 00:36:13.663 "rw_mbytes_per_sec": 0, 00:36:13.663 "r_mbytes_per_sec": 0, 00:36:13.663 "w_mbytes_per_sec": 0 00:36:13.663 }, 00:36:13.663 "claimed": false, 00:36:13.663 "zoned": false, 00:36:13.663 "supported_io_types": { 00:36:13.663 "read": true, 00:36:13.663 "write": true, 00:36:13.663 "unmap": true, 00:36:13.663 "flush": false, 00:36:13.663 "reset": true, 00:36:13.663 "nvme_admin": false, 00:36:13.663 "nvme_io": false, 00:36:13.663 "nvme_io_md": false, 00:36:13.663 "write_zeroes": true, 00:36:13.663 "zcopy": false, 00:36:13.663 "get_zone_info": false, 00:36:13.663 "zone_management": false, 00:36:13.663 "zone_append": false, 00:36:13.663 "compare": false, 00:36:13.663 "compare_and_write": false, 00:36:13.663 "abort": false, 00:36:13.663 "seek_hole": true, 00:36:13.663 "seek_data": true, 00:36:13.663 "copy": false, 00:36:13.663 "nvme_iov_md": false 00:36:13.663 }, 00:36:13.663 "driver_specific": { 00:36:13.663 "lvol": { 00:36:13.663 "lvol_store_uuid": "e68f867e-5095-49c3-932b-2759c08571fc", 00:36:13.663 "base_bdev": "nvme0n1", 00:36:13.663 "thin_provision": true, 00:36:13.663 "num_allocated_clusters": 0, 00:36:13.663 "snapshot": false, 00:36:13.663 "clone": false, 00:36:13.663 "esnap_clone": false 00:36:13.663 } 00:36:13.663 } 00:36:13.663 } 00:36:13.663 ]' 00:36:13.663 23:17:52 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:13.663 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:36:13.663 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:13.925 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:36:13.925 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:36:13.925 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:36:13.925 23:17:52 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:36:13.925 23:17:52 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:36:13.925 23:17:52 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:36:14.187 23:17:52 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:36:14.187 23:17:52 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:36:14.187 23:17:52 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:14.188 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:14.188 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:14.188 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:36:14.188 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:36:14.188 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:14.188 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:14.188 { 00:36:14.188 "name": "0dfc02de-1811-49cb-bcc8-ccccb3657390", 00:36:14.188 "aliases": [ 00:36:14.188 "lvs/nvme0n1p0" 00:36:14.188 ], 00:36:14.188 "product_name": "Logical Volume", 00:36:14.188 "block_size": 4096, 00:36:14.188 "num_blocks": 26476544, 00:36:14.188 "uuid": "0dfc02de-1811-49cb-bcc8-ccccb3657390", 00:36:14.188 "assigned_rate_limits": { 00:36:14.188 "rw_ios_per_sec": 0, 00:36:14.188 "rw_mbytes_per_sec": 0, 00:36:14.188 "r_mbytes_per_sec": 0, 00:36:14.188 "w_mbytes_per_sec": 0 00:36:14.188 }, 00:36:14.188 "claimed": false, 00:36:14.188 "zoned": false, 00:36:14.188 "supported_io_types": { 00:36:14.188 "read": true, 00:36:14.188 "write": true, 00:36:14.188 "unmap": true, 00:36:14.188 "flush": false, 00:36:14.188 "reset": true, 00:36:14.188 "nvme_admin": false, 00:36:14.188 "nvme_io": false, 00:36:14.188 "nvme_io_md": false, 00:36:14.188 "write_zeroes": true, 00:36:14.188 "zcopy": false, 00:36:14.188 "get_zone_info": false, 00:36:14.188 "zone_management": false, 00:36:14.188 "zone_append": false, 00:36:14.188 "compare": false, 00:36:14.188 "compare_and_write": false, 00:36:14.188 "abort": false, 00:36:14.188 "seek_hole": true, 00:36:14.188 "seek_data": true, 00:36:14.188 "copy": false, 00:36:14.188 "nvme_iov_md": false 00:36:14.188 }, 00:36:14.188 "driver_specific": { 00:36:14.188 "lvol": { 00:36:14.188 "lvol_store_uuid": "e68f867e-5095-49c3-932b-2759c08571fc", 00:36:14.188 "base_bdev": "nvme0n1", 00:36:14.188 "thin_provision": true, 00:36:14.188 "num_allocated_clusters": 0, 00:36:14.188 "snapshot": false, 00:36:14.188 "clone": false, 00:36:14.188 "esnap_clone": false 00:36:14.188 } 00:36:14.188 } 00:36:14.188 } 00:36:14.188 ]' 00:36:14.188 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
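Every get_bdev_size call in this sequence is the same arithmetic: pull block_size and num_blocks out of the bdev_get_bdevs JSON and convert to MiB. Reduced to a sketch with this run's numbers (rpc.py and the jq filters are as traced; the standalone wiring around them is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$info")   # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")   # 1310720 for nvme0n1
    echo $(( bs * nb / 1024 / 1024 ))        # 5120 MiB

For the thin lvol the same math gives 26476544 blocks * 4096 B = 103424 MiB, which is where the 5120 and 103424 figures in the size checks above come from.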
00:36:14.188 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:36:14.188 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:14.450 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:36:14.450 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:36:14.450 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:36:14.450 23:17:52 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:36:14.450 23:17:52 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:36:14.450 23:17:52 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:36:14.450 23:17:52 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:14.450 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:14.450 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:14.450 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:36:14.450 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:36:14.450 23:17:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dfc02de-1811-49cb-bcc8-ccccb3657390 00:36:14.711 23:17:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:14.711 { 00:36:14.711 "name": "0dfc02de-1811-49cb-bcc8-ccccb3657390", 00:36:14.711 "aliases": [ 00:36:14.711 "lvs/nvme0n1p0" 00:36:14.711 ], 00:36:14.711 "product_name": "Logical Volume", 00:36:14.711 "block_size": 4096, 00:36:14.711 "num_blocks": 26476544, 00:36:14.711 "uuid": "0dfc02de-1811-49cb-bcc8-ccccb3657390", 00:36:14.711 "assigned_rate_limits": { 00:36:14.711 "rw_ios_per_sec": 0, 00:36:14.711 "rw_mbytes_per_sec": 0, 00:36:14.711 "r_mbytes_per_sec": 0, 00:36:14.711 "w_mbytes_per_sec": 0 00:36:14.711 }, 00:36:14.711 "claimed": false, 00:36:14.711 "zoned": false, 00:36:14.711 "supported_io_types": { 00:36:14.711 "read": true, 00:36:14.711 "write": true, 00:36:14.711 "unmap": true, 00:36:14.711 "flush": false, 00:36:14.711 "reset": true, 00:36:14.711 "nvme_admin": false, 00:36:14.711 "nvme_io": false, 00:36:14.711 "nvme_io_md": false, 00:36:14.711 "write_zeroes": true, 00:36:14.711 "zcopy": false, 00:36:14.711 "get_zone_info": false, 00:36:14.711 "zone_management": false, 00:36:14.711 "zone_append": false, 00:36:14.711 "compare": false, 00:36:14.711 "compare_and_write": false, 00:36:14.711 "abort": false, 00:36:14.711 "seek_hole": true, 00:36:14.711 "seek_data": true, 00:36:14.711 "copy": false, 00:36:14.711 "nvme_iov_md": false 00:36:14.711 }, 00:36:14.711 "driver_specific": { 00:36:14.711 "lvol": { 00:36:14.711 "lvol_store_uuid": "e68f867e-5095-49c3-932b-2759c08571fc", 00:36:14.711 "base_bdev": "nvme0n1", 00:36:14.711 "thin_provision": true, 00:36:14.711 "num_allocated_clusters": 0, 00:36:14.711 "snapshot": false, 00:36:14.711 "clone": false, 00:36:14.711 "esnap_clone": false 00:36:14.711 } 00:36:14.711 } 00:36:14.711 } 00:36:14.711 ]' 00:36:14.711 23:17:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:14.711 23:17:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:36:14.711 23:17:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:14.711 23:17:53 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:36:14.711 23:17:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:36:14.711 23:17:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:36:14.711 23:17:53 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:36:14.711 23:17:53 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0dfc02de-1811-49cb-bcc8-ccccb3657390 --l2p_dram_limit 10' 00:36:14.711 23:17:53 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:36:14.711 23:17:53 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:36:14.711 23:17:53 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:36:14.711 23:17:53 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:36:14.711 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:36:14.711 23:17:53 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0dfc02de-1811-49cb-bcc8-ccccb3657390 --l2p_dram_limit 10 -c nvc0n1p0 00:36:14.973 [2024-12-09 23:17:53.323335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.973 [2024-12-09 23:17:53.323379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:14.973 [2024-12-09 23:17:53.323393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:14.973 [2024-12-09 23:17:53.323399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.973 [2024-12-09 23:17:53.323449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.973 [2024-12-09 23:17:53.323458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:14.973 [2024-12-09 23:17:53.323466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:36:14.973 [2024-12-09 23:17:53.323473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.973 [2024-12-09 23:17:53.323493] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:14.973 [2024-12-09 23:17:53.324108] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:14.973 [2024-12-09 23:17:53.324130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.973 [2024-12-09 23:17:53.324137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:14.973 [2024-12-09 23:17:53.324144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:36:14.973 [2024-12-09 23:17:53.324151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.973 [2024-12-09 23:17:53.324202] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 37f650ee-1fad-44b0-9ad5-d0d80f0dde74 00:36:14.973 [2024-12-09 23:17:53.325151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.973 [2024-12-09 23:17:53.325176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:36:14.973 [2024-12-09 23:17:53.325184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:36:14.973 [2024-12-09 23:17:53.325191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.973 [2024-12-09 23:17:53.329938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.973 [2024-12-09 
23:17:53.329969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:14.973 [2024-12-09 23:17:53.329977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.698 ms 00:36:14.973 [2024-12-09 23:17:53.329985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.973 [2024-12-09 23:17:53.330053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.973 [2024-12-09 23:17:53.330062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:14.973 [2024-12-09 23:17:53.330069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:36:14.973 [2024-12-09 23:17:53.330078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.973 [2024-12-09 23:17:53.330118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.973 [2024-12-09 23:17:53.330127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:14.973 [2024-12-09 23:17:53.330135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:36:14.974 [2024-12-09 23:17:53.330141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.974 [2024-12-09 23:17:53.330159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:14.974 [2024-12-09 23:17:53.333073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.974 [2024-12-09 23:17:53.333097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:14.974 [2024-12-09 23:17:53.333106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms 00:36:14.974 [2024-12-09 23:17:53.333112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.974 [2024-12-09 23:17:53.333142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.974 [2024-12-09 23:17:53.333149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:14.974 [2024-12-09 23:17:53.333156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:14.974 [2024-12-09 23:17:53.333162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.974 [2024-12-09 23:17:53.333183] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:36:14.974 [2024-12-09 23:17:53.333306] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:14.974 [2024-12-09 23:17:53.333320] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:14.974 [2024-12-09 23:17:53.333329] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:14.974 [2024-12-09 23:17:53.333339] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333346] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333355] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:14.974 [2024-12-09 23:17:53.333361] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:14.974 [2024-12-09 23:17:53.333371] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:14.974 [2024-12-09 23:17:53.333377] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:14.974 [2024-12-09 23:17:53.333384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.974 [2024-12-09 23:17:53.333395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:14.974 [2024-12-09 23:17:53.333402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:36:14.974 [2024-12-09 23:17:53.333408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.974 [2024-12-09 23:17:53.333485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.974 [2024-12-09 23:17:53.333491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:14.974 [2024-12-09 23:17:53.333499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:36:14.974 [2024-12-09 23:17:53.333504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.974 [2024-12-09 23:17:53.333584] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:14.974 [2024-12-09 23:17:53.333592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:14.974 [2024-12-09 23:17:53.333600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:14.974 [2024-12-09 23:17:53.333620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:14.974 [2024-12-09 23:17:53.333639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:14.974 [2024-12-09 23:17:53.333652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:14.974 [2024-12-09 23:17:53.333658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:14.974 [2024-12-09 23:17:53.333664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:14.974 [2024-12-09 23:17:53.333670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:14.974 [2024-12-09 23:17:53.333677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:14.974 [2024-12-09 23:17:53.333683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:14.974 [2024-12-09 23:17:53.333696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:14.974 [2024-12-09 23:17:53.333715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:14.974 
[2024-12-09 23:17:53.333731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:14.974 [2024-12-09 23:17:53.333749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:14.974 [2024-12-09 23:17:53.333766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:14.974 [2024-12-09 23:17:53.333785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:14.974 [2024-12-09 23:17:53.333797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:14.974 [2024-12-09 23:17:53.333802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:14.974 [2024-12-09 23:17:53.333809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:14.974 [2024-12-09 23:17:53.333815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:14.974 [2024-12-09 23:17:53.333821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:14.974 [2024-12-09 23:17:53.333827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:14.974 [2024-12-09 23:17:53.333838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:14.974 [2024-12-09 23:17:53.333845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333850] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:14.974 [2024-12-09 23:17:53.333857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:14.974 [2024-12-09 23:17:53.333863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:14.974 [2024-12-09 23:17:53.333878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:14.974 [2024-12-09 23:17:53.333886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:14.974 [2024-12-09 23:17:53.333891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:14.974 [2024-12-09 23:17:53.333898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:14.974 [2024-12-09 23:17:53.333903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:14.974 [2024-12-09 23:17:53.333910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:14.974 [2024-12-09 23:17:53.333916] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:14.974 [2024-12-09 
23:17:53.333926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:14.974 [2024-12-09 23:17:53.333933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:14.974 [2024-12-09 23:17:53.333940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:14.974 [2024-12-09 23:17:53.333945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:14.974 [2024-12-09 23:17:53.333952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:14.974 [2024-12-09 23:17:53.333958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:14.974 [2024-12-09 23:17:53.333965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:14.974 [2024-12-09 23:17:53.333971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:14.974 [2024-12-09 23:17:53.333979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:14.974 [2024-12-09 23:17:53.333984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:14.974 [2024-12-09 23:17:53.333993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:14.974 [2024-12-09 23:17:53.333998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:14.974 [2024-12-09 23:17:53.334005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:14.974 [2024-12-09 23:17:53.334011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:14.974 [2024-12-09 23:17:53.334018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:14.974 [2024-12-09 23:17:53.334023] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:14.974 [2024-12-09 23:17:53.334031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:14.974 [2024-12-09 23:17:53.334038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:14.974 [2024-12-09 23:17:53.334045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:14.975 [2024-12-09 23:17:53.334051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:14.975 [2024-12-09 23:17:53.334058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:14.975 [2024-12-09 23:17:53.334063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.975 [2024-12-09 23:17:53.334070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:14.975 [2024-12-09 23:17:53.334076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:36:14.975 [2024-12-09 23:17:53.334083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.975 [2024-12-09 23:17:53.334125] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:36:14.975 [2024-12-09 23:17:53.334136] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:36:17.518 [2024-12-09 23:17:55.936468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.518 [2024-12-09 23:17:55.936529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:36:17.518 [2024-12-09 23:17:55.936544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2602.334 ms 00:36:17.518 [2024-12-09 23:17:55.936554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.518 [2024-12-09 23:17:55.962187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.518 [2024-12-09 23:17:55.962248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:17.518 [2024-12-09 23:17:55.962262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.166 ms 00:36:17.518 [2024-12-09 23:17:55.962271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.518 [2024-12-09 23:17:55.962398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.518 [2024-12-09 23:17:55.962412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:17.518 [2024-12-09 23:17:55.962420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:36:17.518 [2024-12-09 23:17:55.962433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.776 [2024-12-09 23:17:55.992689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.776 [2024-12-09 23:17:55.992728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:17.776 [2024-12-09 23:17:55.992738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.223 ms 00:36:17.776 [2024-12-09 23:17:55.992748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.776 [2024-12-09 23:17:55.992774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.776 [2024-12-09 23:17:55.992786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:17.776 [2024-12-09 23:17:55.992794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:17.776 [2024-12-09 23:17:55.992809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.776 [2024-12-09 23:17:55.993145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.776 [2024-12-09 23:17:55.993173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:17.776 [2024-12-09 23:17:55.993182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:36:17.776 [2024-12-09 23:17:55.993192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.776 
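The startup being traced here is a long chain of trace_step records, each an Action with a name, duration, and status (scrubbing the 5 NV-cache chunks dominates at roughly 2.6 s). For skimming such logs, a throwaway pipeline that collapses them into one step per line, assuming the one-record-per-line NOTICE format shown here (console.log is a placeholder path):

    grep -oE '(name|duration): .*' console.log |
        awk -F': ' '$1 == "name" { n = $2 } $1 == "duration" { print n "\t" $2 }'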
[2024-12-09 23:17:55.993304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.776 [2024-12-09 23:17:55.993315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:17.776 [2024-12-09 23:17:55.993324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:36:17.776 [2024-12-09 23:17:55.993335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.776 [2024-12-09 23:17:56.007088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.776 [2024-12-09 23:17:56.007124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:17.776 [2024-12-09 23:17:56.007134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.735 ms 00:36:17.776 [2024-12-09 23:17:56.007143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.776 [2024-12-09 23:17:56.041511] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:17.776 [2024-12-09 23:17:56.044191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.777 [2024-12-09 23:17:56.044234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:17.777 [2024-12-09 23:17:56.044250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.980 ms 00:36:17.777 [2024-12-09 23:17:56.044261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.777 [2024-12-09 23:17:56.103128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.777 [2024-12-09 23:17:56.103171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:36:17.777 [2024-12-09 23:17:56.103186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.826 ms 00:36:17.777 [2024-12-09 23:17:56.103195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.777 [2024-12-09 23:17:56.103374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.777 [2024-12-09 23:17:56.103392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:17.777 [2024-12-09 23:17:56.103405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:36:17.777 [2024-12-09 23:17:56.103412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.777 [2024-12-09 23:17:56.125839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.777 [2024-12-09 23:17:56.125873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:36:17.777 [2024-12-09 23:17:56.125887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.383 ms 00:36:17.777 [2024-12-09 23:17:56.125895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.777 [2024-12-09 23:17:56.148095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.777 [2024-12-09 23:17:56.148128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:36:17.777 [2024-12-09 23:17:56.148141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.159 ms 00:36:17.777 [2024-12-09 23:17:56.148149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.777 [2024-12-09 23:17:56.148719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.777 [2024-12-09 23:17:56.148741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:17.777 
[2024-12-09 23:17:56.148751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:36:17.777 [2024-12-09 23:17:56.148760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.777 [2024-12-09 23:17:56.216657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.777 [2024-12-09 23:17:56.216696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:36:17.777 [2024-12-09 23:17:56.216713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.864 ms 00:36:17.777 [2024-12-09 23:17:56.216721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.037 [2024-12-09 23:17:56.240752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.037 [2024-12-09 23:17:56.240788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:36:18.037 [2024-12-09 23:17:56.240802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.974 ms 00:36:18.037 [2024-12-09 23:17:56.240811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.037 [2024-12-09 23:17:56.263821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.037 [2024-12-09 23:17:56.263858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:36:18.037 [2024-12-09 23:17:56.263871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.984 ms 00:36:18.037 [2024-12-09 23:17:56.263878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.037 [2024-12-09 23:17:56.287235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.037 [2024-12-09 23:17:56.287273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:18.037 [2024-12-09 23:17:56.287287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.331 ms 00:36:18.037 [2024-12-09 23:17:56.287295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.037 [2024-12-09 23:17:56.287323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.037 [2024-12-09 23:17:56.287332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:18.037 [2024-12-09 23:17:56.287344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:18.037 [2024-12-09 23:17:56.287351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.037 [2024-12-09 23:17:56.287425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.037 [2024-12-09 23:17:56.287436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:18.037 [2024-12-09 23:17:56.287445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:36:18.037 [2024-12-09 23:17:56.287453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.037 [2024-12-09 23:17:56.288273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2964.524 ms, result 0 00:36:18.037 { 00:36:18.037 "name": "ftl0", 00:36:18.037 "uuid": "37f650ee-1fad-44b0-9ad5-d0d80f0dde74" 00:36:18.037 } 00:36:18.037 23:17:56 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:36:18.037 23:17:56 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:36:18.299 23:17:56 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:36:18.299 23:17:56 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:36:18.299 [2024-12-09 23:17:56.695974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.299 [2024-12-09 23:17:56.696027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:18.299 [2024-12-09 23:17:56.696041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:18.299 [2024-12-09 23:17:56.696051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.299 [2024-12-09 23:17:56.696075] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:18.299 [2024-12-09 23:17:56.698682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.299 [2024-12-09 23:17:56.698714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:18.299 [2024-12-09 23:17:56.698726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:36:18.299 [2024-12-09 23:17:56.698734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.299 [2024-12-09 23:17:56.699002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.299 [2024-12-09 23:17:56.699021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:18.299 [2024-12-09 23:17:56.699031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:36:18.299 [2024-12-09 23:17:56.699039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.299 [2024-12-09 23:17:56.702298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.299 [2024-12-09 23:17:56.702321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:18.299 [2024-12-09 23:17:56.702332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:36:18.299 [2024-12-09 23:17:56.702340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.299 [2024-12-09 23:17:56.708425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.299 [2024-12-09 23:17:56.708464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:18.299 [2024-12-09 23:17:56.708479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.065 ms 00:36:18.299 [2024-12-09 23:17:56.708487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.299 [2024-12-09 23:17:56.731723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.299 [2024-12-09 23:17:56.731756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:18.299 [2024-12-09 23:17:56.731769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.168 ms 00:36:18.299 [2024-12-09 23:17:56.731777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.299 [2024-12-09 23:17:56.746963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.299 [2024-12-09 23:17:56.746995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:18.299 [2024-12-09 23:17:56.747008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.146 ms 00:36:18.299 [2024-12-09 23:17:56.747016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.299 [2024-12-09 23:17:56.747161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.299 [2024-12-09 23:17:56.747172] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:18.299 [2024-12-09 23:17:56.747182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:36:18.299 [2024-12-09 23:17:56.747189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.561 [2024-12-09 23:17:56.770504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.561 [2024-12-09 23:17:56.770533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:18.561 [2024-12-09 23:17:56.770545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.293 ms 00:36:18.561 [2024-12-09 23:17:56.770552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.561 [2024-12-09 23:17:56.793303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.561 [2024-12-09 23:17:56.793333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:18.561 [2024-12-09 23:17:56.793344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.716 ms 00:36:18.561 [2024-12-09 23:17:56.793351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.561 [2024-12-09 23:17:56.815330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.561 [2024-12-09 23:17:56.815361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:18.561 [2024-12-09 23:17:56.815372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.942 ms 00:36:18.561 [2024-12-09 23:17:56.815380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.561 [2024-12-09 23:17:56.837290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.561 [2024-12-09 23:17:56.837321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:18.561 [2024-12-09 23:17:56.837332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.829 ms 00:36:18.561 [2024-12-09 23:17:56.837339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.561 [2024-12-09 23:17:56.837373] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:18.561 [2024-12-09 23:17:56.837386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837476] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 
[2024-12-09 23:17:56.837680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:18.561 [2024-12-09 23:17:56.837737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:36:18.562 [2024-12-09 23:17:56.837887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.837994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:18.562 [2024-12-09 23:17:56.838238] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:18.562 [2024-12-09 23:17:56.838248] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37f650ee-1fad-44b0-9ad5-d0d80f0dde74 00:36:18.562 [2024-12-09 23:17:56.838255] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:36:18.562 [2024-12-09 23:17:56.838266] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:18.562 [2024-12-09 23:17:56.838275] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:18.562 [2024-12-09 23:17:56.838283] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:18.562 [2024-12-09 23:17:56.838290] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:18.562 [2024-12-09 23:17:56.838299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:18.562 [2024-12-09 23:17:56.838307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:18.562 [2024-12-09 23:17:56.838314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:18.562 [2024-12-09 23:17:56.838321] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:36:18.562 [2024-12-09 23:17:56.838330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.562 [2024-12-09 23:17:56.838337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:18.562 [2024-12-09 23:17:56.838347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:36:18.562 [2024-12-09 23:17:56.838355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.562 [2024-12-09 23:17:56.850865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.562 [2024-12-09 23:17:56.850895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:18.562 [2024-12-09 23:17:56.850907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.477 ms 00:36:18.562 [2024-12-09 23:17:56.850915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.562 [2024-12-09 23:17:56.851276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:18.562 [2024-12-09 23:17:56.851296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:18.562 [2024-12-09 23:17:56.851308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:36:18.562 [2024-12-09 23:17:56.851315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.562 [2024-12-09 23:17:56.892426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.562 [2024-12-09 23:17:56.892459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:18.562 [2024-12-09 23:17:56.892472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.562 [2024-12-09 23:17:56.892480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.562 [2024-12-09 23:17:56.892537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.562 [2024-12-09 23:17:56.892545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:18.562 [2024-12-09 23:17:56.892557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.562 [2024-12-09 23:17:56.892564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.562 [2024-12-09 23:17:56.892630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.563 [2024-12-09 23:17:56.892640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:18.563 [2024-12-09 23:17:56.892649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.563 [2024-12-09 23:17:56.892656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.563 [2024-12-09 23:17:56.892676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.563 [2024-12-09 23:17:56.892684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:18.563 [2024-12-09 23:17:56.892692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.563 [2024-12-09 23:17:56.892701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.563 [2024-12-09 23:17:56.968066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.563 [2024-12-09 23:17:56.968110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:18.563 [2024-12-09 23:17:56.968123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:36:18.563 [2024-12-09 23:17:56.968132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.824 [2024-12-09 23:17:57.029688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.824 [2024-12-09 23:17:57.029734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:18.824 [2024-12-09 23:17:57.029747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.824 [2024-12-09 23:17:57.029757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.824 [2024-12-09 23:17:57.029838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.824 [2024-12-09 23:17:57.029847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:18.824 [2024-12-09 23:17:57.029856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.824 [2024-12-09 23:17:57.029863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.824 [2024-12-09 23:17:57.029909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.824 [2024-12-09 23:17:57.029919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:18.824 [2024-12-09 23:17:57.029929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.824 [2024-12-09 23:17:57.029936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.824 [2024-12-09 23:17:57.030025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.824 [2024-12-09 23:17:57.030035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:18.824 [2024-12-09 23:17:57.030044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.824 [2024-12-09 23:17:57.030051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.824 [2024-12-09 23:17:57.030083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.824 [2024-12-09 23:17:57.030097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:18.824 [2024-12-09 23:17:57.030106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.824 [2024-12-09 23:17:57.030114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.824 [2024-12-09 23:17:57.030149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.824 [2024-12-09 23:17:57.030159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:18.824 [2024-12-09 23:17:57.030168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.824 [2024-12-09 23:17:57.030175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.824 [2024-12-09 23:17:57.030237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:18.824 [2024-12-09 23:17:57.030248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:18.824 [2024-12-09 23:17:57.030257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:18.824 [2024-12-09 23:17:57.030264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:18.824 [2024-12-09 23:17:57.030386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.384 ms, result 0 00:36:18.824 true 00:36:18.824 23:17:57 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77441 
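(The console lines that follow are bash xtrace output from the killprocess helper in common/autotest_common.sh. As a reading aid, here is a minimal sketch of the logic those traced lines step through — reconstructed from the trace itself, so the real helper may handle more cases, and the early bail-outs marked below are assumptions rather than confirmed behavior:)

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # '[' -z 77441 ']' in the trace
        kill -0 "$pid" || return 1                # signal 0: is the target still alive?
        if [ "$(uname)" = Linux ]; then
            # resolve the command name of the pid, e.g. reactor_0 for an SPDK app
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1    # assumption: refuse to kill a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap the child so its exit code propagates
    }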
00:36:18.824 23:17:57 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77441 ']' 00:36:18.824 23:17:57 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77441 00:36:18.824 23:17:57 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:36:18.824 23:17:57 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:18.824 23:17:57 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77441 00:36:18.824 23:17:57 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:18.824 23:17:57 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:18.824 killing process with pid 77441 00:36:18.824 23:17:57 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77441' 00:36:18.824 23:17:57 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77441 00:36:18.824 23:17:57 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77441 00:36:33.748 23:18:10 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:36:37.035 262144+0 records in 00:36:37.035 262144+0 records out 00:36:37.035 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.98055 s, 270 MB/s 00:36:37.035 23:18:14 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:38.956 23:18:17 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:38.956 [2024-12-09 23:18:17.092238] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:36:38.956 [2024-12-09 23:18:17.092334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77664 ] 00:36:38.956 [2024-12-09 23:18:17.247383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.956 [2024-12-09 23:18:17.341905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.217 [2024-12-09 23:18:17.597488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:39.217 [2024-12-09 23:18:17.597554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:39.478 [2024-12-09 23:18:17.750360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.750408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:39.478 [2024-12-09 23:18:17.750420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:39.478 [2024-12-09 23:18:17.750428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.750473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.750485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:39.478 [2024-12-09 23:18:17.750493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:36:39.478 [2024-12-09 23:18:17.750500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.750516] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:36:39.478 [2024-12-09 23:18:17.751243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:39.478 [2024-12-09 23:18:17.751260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.751267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:39.478 [2024-12-09 23:18:17.751276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:36:39.478 [2024-12-09 23:18:17.751283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.752353] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:36:39.478 [2024-12-09 23:18:17.764391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.764425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:36:39.478 [2024-12-09 23:18:17.764436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.040 ms 00:36:39.478 [2024-12-09 23:18:17.764444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.764497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.764506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:36:39.478 [2024-12-09 23:18:17.764514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:36:39.478 [2024-12-09 23:18:17.764521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.769040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.769070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:39.478 [2024-12-09 23:18:17.769080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.462 ms 00:36:39.478 [2024-12-09 23:18:17.769091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.769154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.769163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:39.478 [2024-12-09 23:18:17.769171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:36:39.478 [2024-12-09 23:18:17.769178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.769235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.769245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:39.478 [2024-12-09 23:18:17.769256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:39.478 [2024-12-09 23:18:17.769264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.769290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:39.478 [2024-12-09 23:18:17.772580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.772608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:39.478 [2024-12-09 23:18:17.772619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.295 ms 00:36:39.478 [2024-12-09 23:18:17.772626] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.772654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.772663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:39.478 [2024-12-09 23:18:17.772670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:36:39.478 [2024-12-09 23:18:17.772677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.772695] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:39.478 [2024-12-09 23:18:17.772713] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:39.478 [2024-12-09 23:18:17.772746] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:39.478 [2024-12-09 23:18:17.772762] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:36:39.478 [2024-12-09 23:18:17.772861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:39.478 [2024-12-09 23:18:17.772872] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:39.478 [2024-12-09 23:18:17.772882] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:39.478 [2024-12-09 23:18:17.772892] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:39.478 [2024-12-09 23:18:17.772900] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:39.478 [2024-12-09 23:18:17.772908] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:39.478 [2024-12-09 23:18:17.772915] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:39.478 [2024-12-09 23:18:17.772924] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:39.478 [2024-12-09 23:18:17.772931] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:39.478 [2024-12-09 23:18:17.772938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.772946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:39.478 [2024-12-09 23:18:17.772953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:36:39.478 [2024-12-09 23:18:17.772960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.773041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.478 [2024-12-09 23:18:17.773049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:39.478 [2024-12-09 23:18:17.773056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:36:39.478 [2024-12-09 23:18:17.773062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.478 [2024-12-09 23:18:17.773158] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:39.478 [2024-12-09 23:18:17.773168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:39.478 [2024-12-09 23:18:17.773176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:36:39.478 [2024-12-09 23:18:17.773183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.478 [2024-12-09 23:18:17.773190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:39.478 [2024-12-09 23:18:17.773197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:39.478 [2024-12-09 23:18:17.773203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:39.478 [2024-12-09 23:18:17.773210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:39.478 [2024-12-09 23:18:17.773226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:39.478 [2024-12-09 23:18:17.773233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:39.478 [2024-12-09 23:18:17.773240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:39.478 [2024-12-09 23:18:17.773246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:39.478 [2024-12-09 23:18:17.773253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:39.478 [2024-12-09 23:18:17.773265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:39.478 [2024-12-09 23:18:17.773272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:39.478 [2024-12-09 23:18:17.773279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.478 [2024-12-09 23:18:17.773285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:39.478 [2024-12-09 23:18:17.773292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:39.478 [2024-12-09 23:18:17.773298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.478 [2024-12-09 23:18:17.773305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:39.478 [2024-12-09 23:18:17.773311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:39.478 [2024-12-09 23:18:17.773318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:39.478 [2024-12-09 23:18:17.773324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:39.478 [2024-12-09 23:18:17.773330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:39.478 [2024-12-09 23:18:17.773337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:39.478 [2024-12-09 23:18:17.773344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:39.478 [2024-12-09 23:18:17.773350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:39.478 [2024-12-09 23:18:17.773356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:39.478 [2024-12-09 23:18:17.773362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:39.478 [2024-12-09 23:18:17.773368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:39.478 [2024-12-09 23:18:17.773375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:39.478 [2024-12-09 23:18:17.773381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:39.479 [2024-12-09 23:18:17.773387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:39.479 [2024-12-09 23:18:17.773393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:39.479 [2024-12-09 23:18:17.773399] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:36:39.479 [2024-12-09 23:18:17.773405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:39.479 [2024-12-09 23:18:17.773411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:39.479 [2024-12-09 23:18:17.773417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:39.479 [2024-12-09 23:18:17.773423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:39.479 [2024-12-09 23:18:17.773429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.479 [2024-12-09 23:18:17.773443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:39.479 [2024-12-09 23:18:17.773450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:39.479 [2024-12-09 23:18:17.773456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.479 [2024-12-09 23:18:17.773463] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:39.479 [2024-12-09 23:18:17.773470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:39.479 [2024-12-09 23:18:17.773477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:39.479 [2024-12-09 23:18:17.773485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.479 [2024-12-09 23:18:17.773492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:39.479 [2024-12-09 23:18:17.773498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:39.479 [2024-12-09 23:18:17.773504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:39.479 [2024-12-09 23:18:17.773511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:39.479 [2024-12-09 23:18:17.773517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:39.479 [2024-12-09 23:18:17.773523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:39.479 [2024-12-09 23:18:17.773531] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:39.479 [2024-12-09 23:18:17.773540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:39.479 [2024-12-09 23:18:17.773550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:39.479 [2024-12-09 23:18:17.773558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:39.479 [2024-12-09 23:18:17.773565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:39.479 [2024-12-09 23:18:17.773573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:39.479 [2024-12-09 23:18:17.773580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:39.479 [2024-12-09 23:18:17.773587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:39.479 [2024-12-09 23:18:17.773594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:39.479 [2024-12-09 23:18:17.773601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:39.479 [2024-12-09 23:18:17.773608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:39.479 [2024-12-09 23:18:17.773615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:39.479 [2024-12-09 23:18:17.773621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:39.479 [2024-12-09 23:18:17.773628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:39.479 [2024-12-09 23:18:17.773634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:39.479 [2024-12-09 23:18:17.773641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:39.479 [2024-12-09 23:18:17.773648] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:39.479 [2024-12-09 23:18:17.773655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:39.479 [2024-12-09 23:18:17.773663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:39.479 [2024-12-09 23:18:17.773670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:39.479 [2024-12-09 23:18:17.773677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:39.479 [2024-12-09 23:18:17.773684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:39.479 [2024-12-09 23:18:17.773691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.773698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:39.479 [2024-12-09 23:18:17.773705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:36:39.479 [2024-12-09 23:18:17.773713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.799116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.799149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:39.479 [2024-12-09 23:18:17.799159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.351 ms 00:36:39.479 [2024-12-09 23:18:17.799169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.799259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.799268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:39.479 [2024-12-09 23:18:17.799276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.070 ms 00:36:39.479 [2024-12-09 23:18:17.799283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.844931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.844970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:39.479 [2024-12-09 23:18:17.844982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.600 ms 00:36:39.479 [2024-12-09 23:18:17.844989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.845026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.845036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:39.479 [2024-12-09 23:18:17.845047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:39.479 [2024-12-09 23:18:17.845055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.845420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.845443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:39.479 [2024-12-09 23:18:17.845452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:36:39.479 [2024-12-09 23:18:17.845459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.845580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.845589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:39.479 [2024-12-09 23:18:17.845599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:36:39.479 [2024-12-09 23:18:17.845606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.858249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.858280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:39.479 [2024-12-09 23:18:17.858290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.624 ms 00:36:39.479 [2024-12-09 23:18:17.858297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.870415] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:36:39.479 [2024-12-09 23:18:17.870448] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:39.479 [2024-12-09 23:18:17.870459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.870467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:39.479 [2024-12-09 23:18:17.870476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.080 ms 00:36:39.479 [2024-12-09 23:18:17.870483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.894638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.894683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:39.479 [2024-12-09 23:18:17.894695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.119 ms 00:36:39.479 [2024-12-09 23:18:17.894702] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.908211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.908251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:39.479 [2024-12-09 23:18:17.908260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.443 ms 00:36:39.479 [2024-12-09 23:18:17.908267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.919392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.919424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:39.479 [2024-12-09 23:18:17.919434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.093 ms 00:36:39.479 [2024-12-09 23:18:17.919441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.479 [2024-12-09 23:18:17.920019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.479 [2024-12-09 23:18:17.920044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:39.479 [2024-12-09 23:18:17.920053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:36:39.479 [2024-12-09 23:18:17.920063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.738 [2024-12-09 23:18:17.974050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.738 [2024-12-09 23:18:17.974096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:36:39.738 [2024-12-09 23:18:17.974108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.970 ms 00:36:39.738 [2024-12-09 23:18:17.974120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.738 [2024-12-09 23:18:17.985055] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:39.738 [2024-12-09 23:18:17.987569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.738 [2024-12-09 23:18:17.987604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:39.738 [2024-12-09 23:18:17.987617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.409 ms 00:36:39.738 [2024-12-09 23:18:17.987626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.738 [2024-12-09 23:18:17.987719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.738 [2024-12-09 23:18:17.987730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:39.738 [2024-12-09 23:18:17.987738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:36:39.738 [2024-12-09 23:18:17.987745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.738 [2024-12-09 23:18:17.987811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.738 [2024-12-09 23:18:17.987821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:39.738 [2024-12-09 23:18:17.987829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:36:39.738 [2024-12-09 23:18:17.987836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.738 [2024-12-09 23:18:17.987853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.738 [2024-12-09 23:18:17.987861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:36:39.738 [2024-12-09 23:18:17.987868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:39.738 [2024-12-09 23:18:17.987875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.738 [2024-12-09 23:18:17.987904] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:39.738 [2024-12-09 23:18:17.987915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.738 [2024-12-09 23:18:17.987923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:39.738 [2024-12-09 23:18:17.987930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:36:39.738 [2024-12-09 23:18:17.987938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.738 [2024-12-09 23:18:18.011544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.738 [2024-12-09 23:18:18.011582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:39.738 [2024-12-09 23:18:18.011593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.588 ms 00:36:39.738 [2024-12-09 23:18:18.011606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.738 [2024-12-09 23:18:18.011673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.738 [2024-12-09 23:18:18.011682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:39.738 [2024-12-09 23:18:18.011690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:36:39.738 [2024-12-09 23:18:18.011697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.738 [2024-12-09 23:18:18.012887] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 262.121 ms, result 0 00:36:40.676  [2024-12-09T23:18:20.078Z] Copying: 37/1024 [MB] (37 MBps) [2024-12-09T23:18:21.461Z] Copying: 80/1024 [MB] (43 MBps) [2024-12-09T23:18:22.029Z] Copying: 118/1024 [MB] (37 MBps) [2024-12-09T23:18:23.414Z] Copying: 150/1024 [MB] (32 MBps) [2024-12-09T23:18:24.353Z] Copying: 181/1024 [MB] (31 MBps) [2024-12-09T23:18:25.298Z] Copying: 211/1024 [MB] (29 MBps) [2024-12-09T23:18:26.237Z] Copying: 248/1024 [MB] (37 MBps) [2024-12-09T23:18:27.182Z] Copying: 290/1024 [MB] (41 MBps) [2024-12-09T23:18:28.152Z] Copying: 328/1024 [MB] (37 MBps) [2024-12-09T23:18:29.095Z] Copying: 369/1024 [MB] (41 MBps) [2024-12-09T23:18:30.037Z] Copying: 412/1024 [MB] (42 MBps) [2024-12-09T23:18:31.424Z] Copying: 455/1024 [MB] (43 MBps) [2024-12-09T23:18:32.368Z] Copying: 495/1024 [MB] (39 MBps) [2024-12-09T23:18:33.310Z] Copying: 525/1024 [MB] (29 MBps) [2024-12-09T23:18:34.252Z] Copying: 543/1024 [MB] (18 MBps) [2024-12-09T23:18:35.194Z] Copying: 559/1024 [MB] (16 MBps) [2024-12-09T23:18:36.134Z] Copying: 573/1024 [MB] (14 MBps) [2024-12-09T23:18:37.143Z] Copying: 587/1024 [MB] (13 MBps) [2024-12-09T23:18:38.086Z] Copying: 599/1024 [MB] (11 MBps) [2024-12-09T23:18:39.466Z] Copying: 612/1024 [MB] (13 MBps) [2024-12-09T23:18:40.033Z] Copying: 626/1024 [MB] (13 MBps) [2024-12-09T23:18:41.418Z] Copying: 637/1024 [MB] (11 MBps) [2024-12-09T23:18:42.359Z] Copying: 650/1024 [MB] (12 MBps) [2024-12-09T23:18:43.307Z] Copying: 661/1024 [MB] (10 MBps) [2024-12-09T23:18:44.248Z] Copying: 686860/1048576 [kB] (9996 kBps) [2024-12-09T23:18:45.179Z] Copying: 681/1024 [MB] (10 MBps) [2024-12-09T23:18:46.112Z] Copying: 694/1024 
[MB] (12 MBps) [2024-12-09T23:18:47.046Z] Copying: 728334336/1073741824 [B] (0 Bps) [2024-12-09T23:18:48.428Z] Copying: 720616/1048576 [kB] (9352 kBps) [2024-12-09T23:18:49.371Z] Copying: 730728/1048576 [kB] (10112 kBps) [2024-12-09T23:18:50.313Z] Copying: 723/1024 [MB] (10 MBps) [2024-12-09T23:18:51.252Z] Copying: 751208/1048576 [kB] (10072 kBps) [2024-12-09T23:18:52.193Z] Copying: 761224/1048576 [kB] (10016 kBps) [2024-12-09T23:18:53.133Z] Copying: 770360/1048576 [kB] (9136 kBps) [2024-12-09T23:18:54.076Z] Copying: 780544/1048576 [kB] (10184 kBps) [2024-12-09T23:18:55.507Z] Copying: 789964/1048576 [kB] (9420 kBps) [2024-12-09T23:18:56.092Z] Copying: 799488/1048576 [kB] (9524 kBps) [2024-12-09T23:18:57.038Z] Copying: 808864/1048576 [kB] (9376 kBps) [2024-12-09T23:18:58.423Z] Copying: 801/1024 [MB] (11 MBps) [2024-12-09T23:18:59.365Z] Copying: 822/1024 [MB] (21 MBps) [2024-12-09T23:19:00.307Z] Copying: 835/1024 [MB] (12 MBps) [2024-12-09T23:19:01.247Z] Copying: 845/1024 [MB] (10 MBps) [2024-12-09T23:19:02.189Z] Copying: 867/1024 [MB] (22 MBps) [2024-12-09T23:19:03.132Z] Copying: 886/1024 [MB] (18 MBps) [2024-12-09T23:19:04.075Z] Copying: 901/1024 [MB] (15 MBps) [2024-12-09T23:19:05.460Z] Copying: 920/1024 [MB] (19 MBps) [2024-12-09T23:19:06.033Z] Copying: 935/1024 [MB] (14 MBps) [2024-12-09T23:19:07.429Z] Copying: 947/1024 [MB] (12 MBps) [2024-12-09T23:19:08.373Z] Copying: 963/1024 [MB] (15 MBps) [2024-12-09T23:19:09.342Z] Copying: 996456/1048576 [kB] (10208 kBps) [2024-12-09T23:19:10.299Z] Copying: 985/1024 [MB] (12 MBps) [2024-12-09T23:19:11.243Z] Copying: 999/1024 [MB] (14 MBps) [2024-12-09T23:19:12.186Z] Copying: 1010/1024 [MB] (10 MBps) [2024-12-09T23:19:12.448Z] Copying: 1020/1024 [MB] (10 MBps) [2024-12-09T23:19:12.448Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-09 23:19:12.269727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.269792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:33.986 [2024-12-09 23:19:12.269809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:33.986 [2024-12-09 23:19:12.269818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:33.986 [2024-12-09 23:19:12.269841] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:33.986 [2024-12-09 23:19:12.273043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.273087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:33.986 [2024-12-09 23:19:12.273109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.184 ms 00:37:33.986 [2024-12-09 23:19:12.273118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:33.986 [2024-12-09 23:19:12.276666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.276708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:33.986 [2024-12-09 23:19:12.276719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.517 ms 00:37:33.986 [2024-12-09 23:19:12.276727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:33.986 [2024-12-09 23:19:12.297577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.297632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:33.986 [2024-12-09 23:19:12.297646] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.832 ms 00:37:33.986 [2024-12-09 23:19:12.297654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:33.986 [2024-12-09 23:19:12.303840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.303885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:33.986 [2024-12-09 23:19:12.303898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.135 ms 00:37:33.986 [2024-12-09 23:19:12.303907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:33.986 [2024-12-09 23:19:12.331159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.331223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:33.986 [2024-12-09 23:19:12.331238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.189 ms 00:37:33.986 [2024-12-09 23:19:12.331246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:33.986 [2024-12-09 23:19:12.347819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.347876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:33.986 [2024-12-09 23:19:12.347892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.523 ms 00:37:33.986 [2024-12-09 23:19:12.347902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:33.986 [2024-12-09 23:19:12.348062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.348077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:33.986 [2024-12-09 23:19:12.348088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:37:33.986 [2024-12-09 23:19:12.348096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:33.986 [2024-12-09 23:19:12.373830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.373881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:37:33.986 [2024-12-09 23:19:12.373894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.718 ms 00:37:33.986 [2024-12-09 23:19:12.373902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:33.986 [2024-12-09 23:19:12.398947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.399000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:37:33.986 [2024-12-09 23:19:12.399015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.998 ms 00:37:33.986 [2024-12-09 23:19:12.399024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:33.986 [2024-12-09 23:19:12.424105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:33.986 [2024-12-09 23:19:12.424156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:33.986 [2024-12-09 23:19:12.424169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.033 ms 00:37:33.986 [2024-12-09 23:19:12.424177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.248 [2024-12-09 23:19:12.448801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:34.249 [2024-12-09 23:19:12.448851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:37:34.249 [2024-12-09 23:19:12.448863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.536 ms 00:37:34.249 [2024-12-09 23:19:12.448871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.249 [2024-12-09 23:19:12.448918] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:34.249 [2024-12-09 23:19:12.448935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.448953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.448962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.448971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.448979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.448987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.448995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 
23:19:12.449121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:37:34.249 [2024-12-09 23:19:12.449332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:34.249 [2024-12-09 23:19:12.449653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:34.250 [2024-12-09 23:19:12.449779] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:34.250 [2024-12-09 23:19:12.449791] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37f650ee-1fad-44b0-9ad5-d0d80f0dde74 00:37:34.250 [2024-12-09 23:19:12.449799] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:37:34.250 [2024-12-09 23:19:12.449807] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:37:34.250 [2024-12-09 23:19:12.449814] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:37:34.250 [2024-12-09 23:19:12.449822] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:37:34.250 [2024-12-09 23:19:12.449828] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:34.250 [2024-12-09 23:19:12.449844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:34.250 [2024-12-09 23:19:12.449853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:34.250 [2024-12-09 23:19:12.449860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:34.250 [2024-12-09 23:19:12.449866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:34.250 [2024-12-09 23:19:12.449874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:34.250 [2024-12-09 23:19:12.449882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:34.250 [2024-12-09 23:19:12.449892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:37:34.250 [2024-12-09 23:19:12.449901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.463786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:34.250 [2024-12-09 23:19:12.463834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:34.250 [2024-12-09 23:19:12.463846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.860 ms 00:37:34.250 [2024-12-09 23:19:12.463854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.464304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:34.250 [2024-12-09 23:19:12.464325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:34.250 [2024-12-09 23:19:12.464335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:37:34.250 [2024-12-09 23:19:12.464351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.501248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.501300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:34.250 [2024-12-09 23:19:12.501312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.501321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 
[2024-12-09 23:19:12.501395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.501404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:34.250 [2024-12-09 23:19:12.501414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.501438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.501533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.501545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:34.250 [2024-12-09 23:19:12.501554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.501562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.501579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.501588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:34.250 [2024-12-09 23:19:12.501597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.501605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.590633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.590703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:34.250 [2024-12-09 23:19:12.590719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.590729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.670151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.670274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:34.250 [2024-12-09 23:19:12.670295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.670315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.670422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.670437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:34.250 [2024-12-09 23:19:12.670451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.670465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.670553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.670568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:34.250 [2024-12-09 23:19:12.670582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.670593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.670745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.670759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:34.250 [2024-12-09 23:19:12.670771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.670784] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.670836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.670852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:34.250 [2024-12-09 23:19:12.670865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.670876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.670938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.670979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:34.250 [2024-12-09 23:19:12.670993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.671006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.671079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:34.250 [2024-12-09 23:19:12.671093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:34.250 [2024-12-09 23:19:12.671105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:34.250 [2024-12-09 23:19:12.671118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:34.250 [2024-12-09 23:19:12.671348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.555 ms, result 0 00:37:36.805 00:37:36.805 00:37:36.805 23:19:14 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:37:36.805 [2024-12-09 23:19:14.934763] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
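The restore pass above reads the entire FTL bdev back out through spdk_dd: --ib names the input bdev (ftl0), --of the output file, --json the saved bdev configuration used to bring the stack back up, and --count the number of blocks to copy. A minimal sketch of the same kind of invocation plus an integrity check; the file names and the md5sum comparison are illustrative, not taken from this run:

  # Dump the FTL bdev to a regular file using the saved bdev configuration (mirrors the command above)
  ./build/bin/spdk_dd --ib=ftl0 --of=/tmp/ftl_dump --json=/tmp/ftl.json --count=262144
  # Compare against the data originally written; matching digests mean the device restored cleanly
  md5sum /tmp/ftl_dump /tmp/ftl_reference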
00:37:36.805 [2024-12-09 23:19:14.934918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78251 ] 00:37:36.805 [2024-12-09 23:19:15.098951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.805 [2024-12-09 23:19:15.235831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:37.379 [2024-12-09 23:19:15.541371] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:37.379 [2024-12-09 23:19:15.541491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:37.379 [2024-12-09 23:19:15.705081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.379 [2024-12-09 23:19:15.705158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:37.379 [2024-12-09 23:19:15.705174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:37.379 [2024-12-09 23:19:15.705183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.379 [2024-12-09 23:19:15.705263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.379 [2024-12-09 23:19:15.705277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:37.379 [2024-12-09 23:19:15.705287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:37:37.379 [2024-12-09 23:19:15.705295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.379 [2024-12-09 23:19:15.705318] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:37.379 [2024-12-09 23:19:15.706046] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:37.379 [2024-12-09 23:19:15.706077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.379 [2024-12-09 23:19:15.706085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:37.379 [2024-12-09 23:19:15.706094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:37:37.379 [2024-12-09 23:19:15.706103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.379 [2024-12-09 23:19:15.707839] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:37:37.379 [2024-12-09 23:19:15.722350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.379 [2024-12-09 23:19:15.722405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:37:37.379 [2024-12-09 23:19:15.722419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.514 ms 00:37:37.379 [2024-12-09 23:19:15.722429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.380 [2024-12-09 23:19:15.722514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.380 [2024-12-09 23:19:15.722524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:37:37.380 [2024-12-09 23:19:15.722533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:37:37.380 [2024-12-09 23:19:15.722541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.380 [2024-12-09 23:19:15.731236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:37:37.380 [2024-12-09 23:19:15.731282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:37.380 [2024-12-09 23:19:15.731292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.614 ms 00:37:37.380 [2024-12-09 23:19:15.731306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.380 [2024-12-09 23:19:15.731389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.380 [2024-12-09 23:19:15.731398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:37.380 [2024-12-09 23:19:15.731407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:37:37.380 [2024-12-09 23:19:15.731414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.380 [2024-12-09 23:19:15.731460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.380 [2024-12-09 23:19:15.731472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:37.380 [2024-12-09 23:19:15.731480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:37:37.380 [2024-12-09 23:19:15.731488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.380 [2024-12-09 23:19:15.731516] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:37.380 [2024-12-09 23:19:15.735723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.380 [2024-12-09 23:19:15.735768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:37.380 [2024-12-09 23:19:15.735783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.213 ms 00:37:37.380 [2024-12-09 23:19:15.735793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.380 [2024-12-09 23:19:15.735833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.380 [2024-12-09 23:19:15.735843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:37.380 [2024-12-09 23:19:15.735853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:37:37.380 [2024-12-09 23:19:15.735862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.380 [2024-12-09 23:19:15.735918] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:37:37.380 [2024-12-09 23:19:15.735944] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:37:37.380 [2024-12-09 23:19:15.735985] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:37:37.380 [2024-12-09 23:19:15.736007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:37:37.380 [2024-12-09 23:19:15.736116] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:37.380 [2024-12-09 23:19:15.736129] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:37.380 [2024-12-09 23:19:15.736141] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:37:37.380 [2024-12-09 23:19:15.736153] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736163] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736172] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:37.380 [2024-12-09 23:19:15.736181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:37.380 [2024-12-09 23:19:15.736194] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:37.380 [2024-12-09 23:19:15.736202] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:37.380 [2024-12-09 23:19:15.736213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.380 [2024-12-09 23:19:15.736238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:37.380 [2024-12-09 23:19:15.736248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:37:37.380 [2024-12-09 23:19:15.736257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.380 [2024-12-09 23:19:15.736342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.380 [2024-12-09 23:19:15.736362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:37.380 [2024-12-09 23:19:15.736371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:37:37.380 [2024-12-09 23:19:15.736381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.380 [2024-12-09 23:19:15.736485] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:37.380 [2024-12-09 23:19:15.736503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:37.380 [2024-12-09 23:19:15.736514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:37.380 [2024-12-09 23:19:15.736541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:37.380 [2024-12-09 23:19:15.736567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:37.380 [2024-12-09 23:19:15.736583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:37.380 [2024-12-09 23:19:15.736591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:37.380 [2024-12-09 23:19:15.736599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:37.380 [2024-12-09 23:19:15.736614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:37.380 [2024-12-09 23:19:15.736623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:37.380 [2024-12-09 23:19:15.736630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:37.380 [2024-12-09 23:19:15.736643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736650] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:37.380 [2024-12-09 23:19:15.736665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:37.380 [2024-12-09 23:19:15.736686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:37.380 [2024-12-09 23:19:15.736706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:37.380 [2024-12-09 23:19:15.736727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:37.380 [2024-12-09 23:19:15.736748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:37.380 [2024-12-09 23:19:15.736763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:37.380 [2024-12-09 23:19:15.736770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:37.380 [2024-12-09 23:19:15.736776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:37.380 [2024-12-09 23:19:15.736783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:37.380 [2024-12-09 23:19:15.736790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:37.380 [2024-12-09 23:19:15.736797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:37.380 [2024-12-09 23:19:15.736809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:37.380 [2024-12-09 23:19:15.736815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736823] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:37.380 [2024-12-09 23:19:15.736831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:37.380 [2024-12-09 23:19:15.736839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:37.380 [2024-12-09 23:19:15.736857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:37.380 [2024-12-09 23:19:15.736864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:37.380 [2024-12-09 23:19:15.736871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:37.380 
[2024-12-09 23:19:15.736878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:37.380 [2024-12-09 23:19:15.736884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:37.380 [2024-12-09 23:19:15.736891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:37.380 [2024-12-09 23:19:15.736900] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:37.380 [2024-12-09 23:19:15.736909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:37.380 [2024-12-09 23:19:15.736921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:37.380 [2024-12-09 23:19:15.736928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:37.380 [2024-12-09 23:19:15.736935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:37.380 [2024-12-09 23:19:15.736943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:37.380 [2024-12-09 23:19:15.736950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:37.381 [2024-12-09 23:19:15.736957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:37.381 [2024-12-09 23:19:15.736964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:37.381 [2024-12-09 23:19:15.736972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:37:37.381 [2024-12-09 23:19:15.736978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:37.381 [2024-12-09 23:19:15.736986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:37.381 [2024-12-09 23:19:15.736994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:37.381 [2024-12-09 23:19:15.737001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:37.381 [2024-12-09 23:19:15.737008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:37.381 [2024-12-09 23:19:15.737014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:37.381 [2024-12-09 23:19:15.737022] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:37.381 [2024-12-09 23:19:15.737031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:37.381 [2024-12-09 23:19:15.737039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:37:37.381 [2024-12-09 23:19:15.737046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:37.381 [2024-12-09 23:19:15.737053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:37.381 [2024-12-09 23:19:15.737060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:37.381 [2024-12-09 23:19:15.737067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.381 [2024-12-09 23:19:15.737075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:37.381 [2024-12-09 23:19:15.737084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:37:37.381 [2024-12-09 23:19:15.737093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.381 [2024-12-09 23:19:15.770068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.381 [2024-12-09 23:19:15.770126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:37.381 [2024-12-09 23:19:15.770138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.928 ms 00:37:37.381 [2024-12-09 23:19:15.770151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.381 [2024-12-09 23:19:15.770263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.381 [2024-12-09 23:19:15.770273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:37.381 [2024-12-09 23:19:15.770283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:37:37.381 [2024-12-09 23:19:15.770291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.381 [2024-12-09 23:19:15.817514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.381 [2024-12-09 23:19:15.817571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:37.381 [2024-12-09 23:19:15.817584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.159 ms 00:37:37.381 [2024-12-09 23:19:15.817593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.381 [2024-12-09 23:19:15.817647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.381 [2024-12-09 23:19:15.817658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:37.381 [2024-12-09 23:19:15.817671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:37:37.381 [2024-12-09 23:19:15.817679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.381 [2024-12-09 23:19:15.818336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.381 [2024-12-09 23:19:15.818372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:37.381 [2024-12-09 23:19:15.818384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:37:37.381 [2024-12-09 23:19:15.818393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.381 [2024-12-09 23:19:15.818560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.381 [2024-12-09 23:19:15.818571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:37.381 [2024-12-09 23:19:15.818584] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:37:37.381 [2024-12-09 23:19:15.818592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.381 [2024-12-09 23:19:15.834692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.381 [2024-12-09 23:19:15.834750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:37.381 [2024-12-09 23:19:15.834762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.079 ms 00:37:37.381 [2024-12-09 23:19:15.834770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.849401] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:37:37.643 [2024-12-09 23:19:15.849467] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:37:37.643 [2024-12-09 23:19:15.849482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.849492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:37:37.643 [2024-12-09 23:19:15.849502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.599 ms 00:37:37.643 [2024-12-09 23:19:15.849510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.875594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.875645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:37:37.643 [2024-12-09 23:19:15.875659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.027 ms 00:37:37.643 [2024-12-09 23:19:15.875668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.889172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.889233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:37:37.643 [2024-12-09 23:19:15.889246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.458 ms 00:37:37.643 [2024-12-09 23:19:15.889254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.902555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.902605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:37:37.643 [2024-12-09 23:19:15.902617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.250 ms 00:37:37.643 [2024-12-09 23:19:15.902626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.903283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.903318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:37.643 [2024-12-09 23:19:15.903332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:37:37.643 [2024-12-09 23:19:15.903341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.972705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.972782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:37:37.643 [2024-12-09 23:19:15.972808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.341 ms 00:37:37.643 [2024-12-09 23:19:15.972818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.984971] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:37:37.643 [2024-12-09 23:19:15.988761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.988818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:37.643 [2024-12-09 23:19:15.988833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.872 ms 00:37:37.643 [2024-12-09 23:19:15.988843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.988958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.988970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:37:37.643 [2024-12-09 23:19:15.988984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:37:37.643 [2024-12-09 23:19:15.988992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.989067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.989079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:37.643 [2024-12-09 23:19:15.989087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:37:37.643 [2024-12-09 23:19:15.989096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.989116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.989126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:37.643 [2024-12-09 23:19:15.989135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:37.643 [2024-12-09 23:19:15.989143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:15.989184] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:37:37.643 [2024-12-09 23:19:15.989194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:15.989203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:37:37.643 [2024-12-09 23:19:15.989212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:37:37.643 [2024-12-09 23:19:15.989240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:16.016139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:16.016199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:37.643 [2024-12-09 23:19:16.016229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.878 ms 00:37:37.643 [2024-12-09 23:19:16.016239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.643 [2024-12-09 23:19:16.016331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:37.643 [2024-12-09 23:19:16.016342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:37.643 [2024-12-09 23:19:16.016352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:37:37.643 [2024-12-09 23:19:16.016361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
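Each management step in the traces above is logged by trace_step() as an Action / name / duration / status quartet, so step timings can be mined straight out of the console output. A rough sketch, assuming the raw log (one *NOTICE* entry per line) has been saved as console.log; the file name and the parsing approach are illustrative:

  # Pair each "name:" entry with the "duration:" entry that follows it, then list the slowest steps first
  awk '/trace_step/ && /name: /     { sub(/.*name: /, "");     step = $0 }
       /trace_step/ && /duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                      printf "%10.3f ms  %s\n", $0 + 0, step }' console.log | sort -rn | head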
00:37:37.643 [2024-12-09 23:19:16.017693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.069 ms, result 0
00:37:39.029 [2024-12-09T23:19:18.434Z .. 2024-12-09T23:20:45.437Z] Copying: 1024/1024 [MB] (average 11 MBps; periodic spdk_dd progress updates condensed)
[2024-12-09 23:20:45.339721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:06.975 [2024-12-09 23:20:45.339810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:39:06.975 [2024-12-09 23:20:45.339827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:39:06.975 [2024-12-09 23:20:45.339836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:06.975 [2024-12-09 23:20:45.339862] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:39:06.975 [2024-12-09 23:20:45.343575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:06.975 [2024-12-09 23:20:45.343637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:39:06.975 [2024-12-09 23:20:45.343654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.695 ms
00:39:06.975 [2024-12-09 23:20:45.343666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:06.975 [2024-12-09 23:20:45.343997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:06.975 [2024-12-09 23:20:45.344013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:39:06.975 [2024-12-09 23:20:45.344026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms
00:39:06.975 [2024-12-09 23:20:45.344036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 00:39:06.975 [2024-12-09 23:20:45.349026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.975 [2024-12-09 23:20:45.349061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:06.975 [2024-12-09 23:20:45.349076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.970 ms 00:39:06.975 [2024-12-09 23:20:45.349094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.975 [2024-12-09 23:20:45.356761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.975 [2024-12-09 23:20:45.356811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:06.975 [2024-12-09 23:20:45.356826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.643 ms 00:39:06.975 [2024-12-09 23:20:45.356835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.975 [2024-12-09 23:20:45.384165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.975 [2024-12-09 23:20:45.384228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:06.975 [2024-12-09 23:20:45.384242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.257 ms 00:39:06.975 [2024-12-09 23:20:45.384250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.975 [2024-12-09 23:20:45.400560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.975 [2024-12-09 23:20:45.400610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:06.975 [2024-12-09 23:20:45.400624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.257 ms 00:39:06.975 [2024-12-09 23:20:45.400633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.975 [2024-12-09 23:20:45.400775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.975 [2024-12-09 23:20:45.400787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:06.975 [2024-12-09 23:20:45.400797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:39:06.975 [2024-12-09 23:20:45.400805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:06.975 [2024-12-09 23:20:45.427092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:06.975 [2024-12-09 23:20:45.427142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:06.975 [2024-12-09 23:20:45.427154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.269 ms 00:39:06.975 [2024-12-09 23:20:45.427162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.238 [2024-12-09 23:20:45.452713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.238 [2024-12-09 23:20:45.452756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:07.238 [2024-12-09 23:20:45.452768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.503 ms 00:39:07.238 [2024-12-09 23:20:45.452776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.238 [2024-12-09 23:20:45.477937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.238 [2024-12-09 23:20:45.477988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:07.238 [2024-12-09 23:20:45.478000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.115 ms 00:39:07.238 [2024-12-09 
23:20:45.478007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.238 [2024-12-09 23:20:45.502883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.238 [2024-12-09 23:20:45.502930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:07.238 [2024-12-09 23:20:45.502943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.800 ms 00:39:07.238 [2024-12-09 23:20:45.502951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.238 [2024-12-09 23:20:45.502997] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:07.238 [2024-12-09 23:20:45.503021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 
[... ftl_dev_dump_bands output trimmed: Bands 2-99 are identical, each 0 / 261120 wr_cnt: 0 state: free ...] 
00:39:07.240 [2024-12-09 23:20:45.503824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:07.240 [2024-12-09 23:20:45.503840] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:07.240 [2024-12-09 23:20:45.503849] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37f650ee-1fad-44b0-9ad5-d0d80f0dde74 00:39:07.240 [2024-12-09 23:20:45.503856] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:07.240 [2024-12-09 23:20:45.503863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:07.240 [2024-12-09 23:20:45.503870] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:07.240 [2024-12-09 23:20:45.503878] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:07.240 [2024-12-09 23:20:45.503893] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:07.240 [2024-12-09 23:20:45.503901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:07.240 [2024-12-09 23:20:45.503909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:07.240 [2024-12-09 23:20:45.503915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:07.240 [2024-12-09 23:20:45.503922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:07.240 [2024-12-09 23:20:45.503929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.240 [2024-12-09 23:20:45.503937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:07.240 [2024-12-09 23:20:45.503947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:39:07.240 [2024-12-09 23:20:45.503958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.240 [2024-12-09 23:20:45.517526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.240 [2024-12-09 23:20:45.517570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:07.240 [2024-12-09 23:20:45.517583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.547 ms 00:39:07.240 [2024-12-09 23:20:45.517591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.240 [2024-12-09 23:20:45.517989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:07.240 [2024-12-09 23:20:45.518006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:07.240 [2024-12-09 23:20:45.518024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:39:07.240 [2024-12-09 23:20:45.518033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.240 [2024-12-09 23:20:45.554819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.240 [2024-12-09 23:20:45.554871] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:07.240 [2024-12-09 23:20:45.554883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.240 [2024-12-09 23:20:45.554892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.240 [2024-12-09 23:20:45.554957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.240 [2024-12-09 23:20:45.554966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:07.240 [2024-12-09 23:20:45.554981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.240 [2024-12-09 23:20:45.554989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.240 [2024-12-09 23:20:45.555072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.240 [2024-12-09 23:20:45.555084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:07.240 [2024-12-09 23:20:45.555092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.240 [2024-12-09 23:20:45.555100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.240 [2024-12-09 23:20:45.555116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.240 [2024-12-09 23:20:45.555124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:07.240 [2024-12-09 23:20:45.555133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.240 [2024-12-09 23:20:45.555144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.240 [2024-12-09 23:20:45.639847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.240 [2024-12-09 23:20:45.639906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:07.240 [2024-12-09 23:20:45.639921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.240 [2024-12-09 23:20:45.639930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.502 [2024-12-09 23:20:45.712390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.502 [2024-12-09 23:20:45.712457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:07.502 [2024-12-09 23:20:45.712477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.502 [2024-12-09 23:20:45.712486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.502 [2024-12-09 23:20:45.712553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.502 [2024-12-09 23:20:45.712563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:07.502 [2024-12-09 23:20:45.712572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.502 [2024-12-09 23:20:45.712581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.502 [2024-12-09 23:20:45.712638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.502 [2024-12-09 23:20:45.712648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:07.502 [2024-12-09 23:20:45.712658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.502 [2024-12-09 23:20:45.712666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.502 [2024-12-09 23:20:45.712770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:39:07.502 [2024-12-09 23:20:45.712782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:07.502 [2024-12-09 23:20:45.712791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.502 [2024-12-09 23:20:45.712798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.502 [2024-12-09 23:20:45.712830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.502 [2024-12-09 23:20:45.712840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:07.502 [2024-12-09 23:20:45.712848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.502 [2024-12-09 23:20:45.712856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.502 [2024-12-09 23:20:45.712902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.502 [2024-12-09 23:20:45.712912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:07.502 [2024-12-09 23:20:45.712920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.502 [2024-12-09 23:20:45.712929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.502 [2024-12-09 23:20:45.712974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:07.502 [2024-12-09 23:20:45.712984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:07.502 [2024-12-09 23:20:45.712993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:07.502 [2024-12-09 23:20:45.713001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:07.502 [2024-12-09 23:20:45.713139] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.386 ms, result 0 00:39:08.075 00:39:08.075 00:39:08.075 23:20:46 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:10.618 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:39:10.618 23:20:48 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:39:10.618 [2024-12-09 23:20:48.809143] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:39:10.618 [2024-12-09 23:20:48.809362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79214 ] 00:39:10.618 [2024-12-09 23:20:48.974493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.880 [2024-12-09 23:20:49.104507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.143 [2024-12-09 23:20:49.401164] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:11.143 [2024-12-09 23:20:49.401269] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:11.144 [2024-12-09 23:20:49.564437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.564501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:11.144 [2024-12-09 23:20:49.564518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:11.144 [2024-12-09 23:20:49.564526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.564583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.564596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:11.144 [2024-12-09 23:20:49.564606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:39:11.144 [2024-12-09 23:20:49.564614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.564635] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:11.144 [2024-12-09 23:20:49.565347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:11.144 [2024-12-09 23:20:49.565376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.565385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:11.144 [2024-12-09 23:20:49.565395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:39:11.144 [2024-12-09 23:20:49.565414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.567103] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:11.144 [2024-12-09 23:20:49.581536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.581589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:11.144 [2024-12-09 23:20:49.581603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.435 ms 00:39:11.144 [2024-12-09 23:20:49.581612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.581699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.581710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:11.144 [2024-12-09 23:20:49.581720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:39:11.144 [2024-12-09 23:20:49.581728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.590001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:39:11.144 [2024-12-09 23:20:49.590050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:11.144 [2024-12-09 23:20:49.590062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.193 ms 00:39:11.144 [2024-12-09 23:20:49.590077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.590155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.590165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:11.144 [2024-12-09 23:20:49.590174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:39:11.144 [2024-12-09 23:20:49.590182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.590245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.590256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:11.144 [2024-12-09 23:20:49.590265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:11.144 [2024-12-09 23:20:49.590273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.590299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:11.144 [2024-12-09 23:20:49.594329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.594369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:11.144 [2024-12-09 23:20:49.594383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.035 ms 00:39:11.144 [2024-12-09 23:20:49.594392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.594430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.594440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:11.144 [2024-12-09 23:20:49.594449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:39:11.144 [2024-12-09 23:20:49.594456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.594509] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:11.144 [2024-12-09 23:20:49.594532] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:11.144 [2024-12-09 23:20:49.594570] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:11.144 [2024-12-09 23:20:49.594589] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:11.144 [2024-12-09 23:20:49.594696] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:11.144 [2024-12-09 23:20:49.594707] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:11.144 [2024-12-09 23:20:49.594718] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:11.144 [2024-12-09 23:20:49.594729] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:11.144 [2024-12-09 23:20:49.594739] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:11.144 [2024-12-09 23:20:49.594748] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:11.144 [2024-12-09 23:20:49.594756] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:11.144 [2024-12-09 23:20:49.594767] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:11.144 [2024-12-09 23:20:49.594775] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:11.144 [2024-12-09 23:20:49.594783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.594790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:11.144 [2024-12-09 23:20:49.594798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:39:11.144 [2024-12-09 23:20:49.594806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.594889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.144 [2024-12-09 23:20:49.594898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:11.144 [2024-12-09 23:20:49.594906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:39:11.144 [2024-12-09 23:20:49.594913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.144 [2024-12-09 23:20:49.595015] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:11.144 [2024-12-09 23:20:49.595026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:11.144 [2024-12-09 23:20:49.595034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:11.144 [2024-12-09 23:20:49.595043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:11.144 [2024-12-09 23:20:49.595051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:11.144 [2024-12-09 23:20:49.595059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:11.144 [2024-12-09 23:20:49.595067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:11.144 [2024-12-09 23:20:49.595074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:11.144 [2024-12-09 23:20:49.595081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:11.144 [2024-12-09 23:20:49.595088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:11.144 [2024-12-09 23:20:49.595095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:11.144 [2024-12-09 23:20:49.595104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:11.145 [2024-12-09 23:20:49.595111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:11.145 [2024-12-09 23:20:49.595127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:11.145 [2024-12-09 23:20:49.595134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:11.145 [2024-12-09 23:20:49.595141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:11.145 [2024-12-09 23:20:49.595149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:11.145 [2024-12-09 23:20:49.595157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:11.145 [2024-12-09 23:20:49.595164] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:11.145 [2024-12-09 23:20:49.595172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:11.145 [2024-12-09 23:20:49.595179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:11.145 [2024-12-09 23:20:49.595186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:11.145 [2024-12-09 23:20:49.595193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:11.145 [2024-12-09 23:20:49.595200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:11.145 [2024-12-09 23:20:49.595206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:11.145 [2024-12-09 23:20:49.595213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:11.145 [2024-12-09 23:20:49.595253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:11.145 [2024-12-09 23:20:49.595261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:11.145 [2024-12-09 23:20:49.595269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:11.145 [2024-12-09 23:20:49.595276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:11.145 [2024-12-09 23:20:49.595284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:11.145 [2024-12-09 23:20:49.595290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:11.145 [2024-12-09 23:20:49.595297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:11.145 [2024-12-09 23:20:49.595304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:11.145 [2024-12-09 23:20:49.595311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:11.145 [2024-12-09 23:20:49.595318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:11.145 [2024-12-09 23:20:49.595325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:11.145 [2024-12-09 23:20:49.595332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:11.145 [2024-12-09 23:20:49.595339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:11.145 [2024-12-09 23:20:49.595346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:11.145 [2024-12-09 23:20:49.595353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:11.145 [2024-12-09 23:20:49.595360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:11.145 [2024-12-09 23:20:49.595366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:11.145 [2024-12-09 23:20:49.595375] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:11.145 [2024-12-09 23:20:49.595383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:11.145 [2024-12-09 23:20:49.595391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:11.145 [2024-12-09 23:20:49.595399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:11.145 [2024-12-09 23:20:49.595407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:11.145 [2024-12-09 23:20:49.595414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:11.145 [2024-12-09 23:20:49.595421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:11.145 
[2024-12-09 23:20:49.595428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:11.145 [2024-12-09 23:20:49.595435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:11.145 [2024-12-09 23:20:49.595441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:11.145 [2024-12-09 23:20:49.595450] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:11.145 [2024-12-09 23:20:49.595459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:11.145 [2024-12-09 23:20:49.595470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:11.145 [2024-12-09 23:20:49.595479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:11.145 [2024-12-09 23:20:49.595486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:11.145 [2024-12-09 23:20:49.595493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:11.145 [2024-12-09 23:20:49.595500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:11.145 [2024-12-09 23:20:49.595507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:11.145 [2024-12-09 23:20:49.595515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:11.145 [2024-12-09 23:20:49.595522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:11.145 [2024-12-09 23:20:49.595529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:11.145 [2024-12-09 23:20:49.595536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:11.145 [2024-12-09 23:20:49.595543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:11.145 [2024-12-09 23:20:49.595551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:11.145 [2024-12-09 23:20:49.595558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:11.145 [2024-12-09 23:20:49.595565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:11.145 [2024-12-09 23:20:49.595572] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:11.145 [2024-12-09 23:20:49.595581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:11.145 [2024-12-09 23:20:49.595589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:39:11.145 [2024-12-09 23:20:49.595596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:11.145 [2024-12-09 23:20:49.595604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:11.145 [2024-12-09 23:20:49.595612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:11.145 [2024-12-09 23:20:49.595621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.145 [2024-12-09 23:20:49.595628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:11.145 [2024-12-09 23:20:49.595636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:39:11.145 [2024-12-09 23:20:49.595644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.415 [2024-12-09 23:20:49.627773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.415 [2024-12-09 23:20:49.627829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:11.415 [2024-12-09 23:20:49.627840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.084 ms 00:39:11.415 [2024-12-09 23:20:49.627852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.415 [2024-12-09 23:20:49.627942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.415 [2024-12-09 23:20:49.627951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:11.415 [2024-12-09 23:20:49.627960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:39:11.415 [2024-12-09 23:20:49.627968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.415 [2024-12-09 23:20:49.678074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.415 [2024-12-09 23:20:49.678134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:11.415 [2024-12-09 23:20:49.678148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.042 ms 00:39:11.416 [2024-12-09 23:20:49.678157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.416 [2024-12-09 23:20:49.678210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.416 [2024-12-09 23:20:49.678240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:11.416 [2024-12-09 23:20:49.678254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:11.416 [2024-12-09 23:20:49.678262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.416 [2024-12-09 23:20:49.678888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.416 [2024-12-09 23:20:49.678926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:11.416 [2024-12-09 23:20:49.678938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:39:11.416 [2024-12-09 23:20:49.678946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.416 [2024-12-09 23:20:49.679105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.416 [2024-12-09 23:20:49.679117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:11.416 [2024-12-09 23:20:49.679129] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:39:11.416 [2024-12-09 23:20:49.679137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.416 [2024-12-09 23:20:49.694929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.416 [2024-12-09 23:20:49.694982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:11.416 [2024-12-09 23:20:49.694994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.770 ms 00:39:11.416 [2024-12-09 23:20:49.695001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.416 [2024-12-09 23:20:49.709300] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:11.416 [2024-12-09 23:20:49.709353] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:11.416 [2024-12-09 23:20:49.709367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.416 [2024-12-09 23:20:49.709376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:11.416 [2024-12-09 23:20:49.709386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.252 ms 00:39:11.416 [2024-12-09 23:20:49.709393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.416 [2024-12-09 23:20:49.740472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.416 [2024-12-09 23:20:49.740525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:11.416 [2024-12-09 23:20:49.740539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.012 ms 00:39:11.416 [2024-12-09 23:20:49.740548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.416 [2024-12-09 23:20:49.753895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.416 [2024-12-09 23:20:49.753944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:11.416 [2024-12-09 23:20:49.753956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.279 ms 00:39:11.416 [2024-12-09 23:20:49.753964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.416 [2024-12-09 23:20:49.766699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.416 [2024-12-09 23:20:49.766750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:11.416 [2024-12-09 23:20:49.766763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.686 ms 00:39:11.416 [2024-12-09 23:20:49.766770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.416 [2024-12-09 23:20:49.767443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.416 [2024-12-09 23:20:49.767482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:11.417 [2024-12-09 23:20:49.767497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:39:11.417 [2024-12-09 23:20:49.767505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.417 [2024-12-09 23:20:49.834969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.417 [2024-12-09 23:20:49.835039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:11.417 [2024-12-09 23:20:49.835064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.442 ms 00:39:11.417 [2024-12-09 23:20:49.835074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.417 [2024-12-09 23:20:49.847751] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:11.417 [2024-12-09 23:20:49.851352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.417 [2024-12-09 23:20:49.851403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:11.417 [2024-12-09 23:20:49.851416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.215 ms 00:39:11.417 [2024-12-09 23:20:49.851426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.417 [2024-12-09 23:20:49.851533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.417 [2024-12-09 23:20:49.851544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:11.417 [2024-12-09 23:20:49.851559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:39:11.417 [2024-12-09 23:20:49.851567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.417 [2024-12-09 23:20:49.851642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.417 [2024-12-09 23:20:49.851654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:11.417 [2024-12-09 23:20:49.851662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:39:11.417 [2024-12-09 23:20:49.851670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.417 [2024-12-09 23:20:49.851692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.417 [2024-12-09 23:20:49.851701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:11.417 [2024-12-09 23:20:49.851710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:11.417 [2024-12-09 23:20:49.851718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.417 [2024-12-09 23:20:49.851760] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:11.417 [2024-12-09 23:20:49.851771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.417 [2024-12-09 23:20:49.851779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:11.417 [2024-12-09 23:20:49.851787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:11.417 [2024-12-09 23:20:49.851795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.683 [2024-12-09 23:20:49.878058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.683 [2024-12-09 23:20:49.878112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:11.683 [2024-12-09 23:20:49.878133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.243 ms 00:39:11.683 [2024-12-09 23:20:49.878141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.683 [2024-12-09 23:20:49.878243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.683 [2024-12-09 23:20:49.878255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:11.683 [2024-12-09 23:20:49.878265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:39:11.683 [2024-12-09 23:20:49.878274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:39:11.683 [2024-12-09 23:20:49.879570] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 314.611 ms, result 0
00:39:12.628  [2024-12-09T23:20:52.122Z] Copying: 14/1024 [MB] (14 MBps)
[2024-12-09T23:20:53.069Z] Copying: 39/1024 [MB] (24 MBps)
[2024-12-09T23:20:54.010Z] Copying: 60/1024 [MB] (20 MBps)
[2024-12-09T23:20:54.952Z] Copying: 73/1024 [MB] (13 MBps)
[2024-12-09T23:20:55.923Z] Copying: 85/1024 [MB] (11 MBps)
[2024-12-09T23:20:57.323Z] Copying: 99/1024 [MB] (13 MBps)
[2024-12-09T23:20:57.896Z] Copying: 109/1024 [MB] (10 MBps)
[2024-12-09T23:20:59.286Z] Copying: 122/1024 [MB] (12 MBps)
[2024-12-09T23:21:00.232Z] Copying: 134/1024 [MB] (11 MBps)
[2024-12-09T23:21:01.177Z] Copying: 147/1024 [MB] (12 MBps)
[2024-12-09T23:21:02.119Z] Copying: 166/1024 [MB] (18 MBps)
[2024-12-09T23:21:03.065Z] Copying: 191/1024 [MB] (25 MBps)
[2024-12-09T23:21:04.009Z] Copying: 211/1024 [MB] (20 MBps)
[2024-12-09T23:21:04.955Z] Copying: 226/1024 [MB] (14 MBps)
[2024-12-09T23:21:05.922Z] Copying: 238/1024 [MB] (12 MBps)
[2024-12-09T23:21:07.309Z] Copying: 253/1024 [MB] (14 MBps)
[2024-12-09T23:21:07.903Z] Copying: 263/1024 [MB] (10 MBps)
[2024-12-09T23:21:09.297Z] Copying: 279656/1048576 [kB] (9704 kBps)
[2024-12-09T23:21:10.246Z] Copying: 289408/1048576 [kB] (9752 kBps)
[2024-12-09T23:21:11.190Z] Copying: 292/1024 [MB] (10 MBps)
[2024-12-09T23:21:12.132Z] Copying: 309640/1048576 [kB] (9752 kBps)
[2024-12-09T23:21:13.076Z] Copying: 319104/1048576 [kB] (9464 kBps)
[2024-12-09T23:21:14.025Z] Copying: 321/1024 [MB] (10 MBps)
[2024-12-09T23:21:14.965Z] Copying: 331/1024 [MB] (10 MBps)
[2024-12-09T23:21:15.906Z] Copying: 343/1024 [MB] (11 MBps)
[2024-12-09T23:21:17.310Z] Copying: 359/1024 [MB] (16 MBps)
[2024-12-09T23:21:18.254Z] Copying: 372/1024 [MB] (12 MBps)
[2024-12-09T23:21:19.213Z] Copying: 382/1024 [MB] (10 MBps)
[2024-12-09T23:21:20.187Z] Copying: 393/1024 [MB] (11 MBps)
[2024-12-09T23:21:21.132Z] Copying: 406/1024 [MB] (12 MBps)
[2024-12-09T23:21:22.076Z] Copying: 445/1024 [MB] (39 MBps)
[2024-12-09T23:21:23.017Z] Copying: 487/1024 [MB] (42 MBps)
[2024-12-09T23:21:23.960Z] Copying: 531/1024 [MB] (43 MBps)
[2024-12-09T23:21:24.902Z] Copying: 575/1024 [MB] (43 MBps)
[2024-12-09T23:21:26.288Z] Copying: 623/1024 [MB] (48 MBps)
[2024-12-09T23:21:27.228Z] Copying: 668/1024 [MB] (44 MBps)
[2024-12-09T23:21:28.167Z] Copying: 711/1024 [MB] (43 MBps)
[2024-12-09T23:21:29.107Z] Copying: 753/1024 [MB] (42 MBps)
[2024-12-09T23:21:30.049Z] Copying: 770/1024 [MB] (16 MBps)
[2024-12-09T23:21:30.992Z] Copying: 785/1024 [MB] (14 MBps)
[2024-12-09T23:21:31.934Z] Copying: 800/1024 [MB] (14 MBps)
[2024-12-09T23:21:32.911Z] Copying: 835/1024 [MB] (35 MBps)
[2024-12-09T23:21:34.295Z] Copying: 886/1024 [MB] (50 MBps)
[2024-12-09T23:21:35.233Z] Copying: 906/1024 [MB] (20 MBps)
[2024-12-09T23:21:36.165Z] Copying: 931/1024 [MB] (24 MBps)
[2024-12-09T23:21:37.099Z] Copying: 955/1024 [MB] (24 MBps)
[2024-12-09T23:21:38.033Z] Copying: 999/1024 [MB] (44 MBps)
[2024-12-09T23:21:38.599Z] Copying: 1023/1024 [MB] (23 MBps)
[2024-12-09T23:21:38.599Z] Copying: 1024/1024 [MB] (average 21 MBps)
[2024-12-09 23:21:38.396253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:00.137 [2024-12-09 23:21:38.396316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:40:00.137 [2024-12-09 23:21:38.396339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:40:00.137 [2024-12-09 23:21:38.396348]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.137 [2024-12-09 23:21:38.399177] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:00.137 [2024-12-09 23:21:38.404964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.137 [2024-12-09 23:21:38.404998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:00.137 [2024-12-09 23:21:38.405010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.737 ms 00:40:00.137 [2024-12-09 23:21:38.405018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.137 [2024-12-09 23:21:38.415233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.137 [2024-12-09 23:21:38.415266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:00.137 [2024-12-09 23:21:38.415277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.276 ms 00:40:00.137 [2024-12-09 23:21:38.415290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.137 [2024-12-09 23:21:38.432972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.137 [2024-12-09 23:21:38.433004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:00.137 [2024-12-09 23:21:38.433014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.664 ms 00:40:00.137 [2024-12-09 23:21:38.433023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.137 [2024-12-09 23:21:38.439134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.137 [2024-12-09 23:21:38.439162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:00.137 [2024-12-09 23:21:38.439173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.086 ms 00:40:00.137 [2024-12-09 23:21:38.439186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.137 [2024-12-09 23:21:38.463453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.137 [2024-12-09 23:21:38.463485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:00.137 [2024-12-09 23:21:38.463497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.208 ms 00:40:00.137 [2024-12-09 23:21:38.463506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.137 [2024-12-09 23:21:38.477346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.137 [2024-12-09 23:21:38.477376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:00.137 [2024-12-09 23:21:38.477388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.809 ms 00:40:00.137 [2024-12-09 23:21:38.477402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.137 [2024-12-09 23:21:38.529526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.137 [2024-12-09 23:21:38.529558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:00.137 [2024-12-09 23:21:38.529568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.091 ms 00:40:00.137 [2024-12-09 23:21:38.529578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.137 [2024-12-09 23:21:38.552298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.137 [2024-12-09 23:21:38.552327] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:00.137 [2024-12-09 23:21:38.552338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.706 ms 00:40:00.137 [2024-12-09 23:21:38.552345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.137 [2024-12-09 23:21:38.574453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.137 [2024-12-09 23:21:38.574480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:00.137 [2024-12-09 23:21:38.574490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.068 ms 00:40:00.137 [2024-12-09 23:21:38.574497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.137 [2024-12-09 23:21:38.596461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.137 [2024-12-09 23:21:38.596489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:00.137 [2024-12-09 23:21:38.596499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.933 ms 00:40:00.137 [2024-12-09 23:21:38.596506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.396 [2024-12-09 23:21:38.618186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.396 [2024-12-09 23:21:38.618213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:00.396 [2024-12-09 23:21:38.618233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.629 ms 00:40:00.396 [2024-12-09 23:21:38.618240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.396 [2024-12-09 23:21:38.618270] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:00.396 [2024-12-09 23:21:38.618285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 119040 / 261120 wr_cnt: 1 state: open 00:40:00.396 [2024-12-09 23:21:38.618296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:00.396 [2024-12-09 23:21:38.618388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618570] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 
23:21:38.618760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:00.397 [2024-12-09 23:21:38.618939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:40:00.397 [2024-12-09 23:21:38.618946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.618953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.618960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.618968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.618975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.618983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.618991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.618998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.619005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.619012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.619020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.619026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.619035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:00.398 [2024-12-09 23:21:38.619050] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:00.398 [2024-12-09 23:21:38.619059] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37f650ee-1fad-44b0-9ad5-d0d80f0dde74 00:40:00.398 [2024-12-09 23:21:38.619067] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 119040 00:40:00.398 [2024-12-09 23:21:38.619074] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 120000 00:40:00.398 [2024-12-09 23:21:38.619081] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 119040 00:40:00.398 [2024-12-09 23:21:38.619089] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:40:00.398 [2024-12-09 23:21:38.619104] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:00.398 [2024-12-09 23:21:38.619112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:00.398 [2024-12-09 23:21:38.619120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:00.398 [2024-12-09 23:21:38.619126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:00.398 [2024-12-09 23:21:38.619133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:00.398 [2024-12-09 23:21:38.619140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.398 [2024-12-09 23:21:38.619147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:00.398 [2024-12-09 23:21:38.619155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:40:00.398 [2024-12-09 23:21:38.619162] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.631866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.398 [2024-12-09 23:21:38.631893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:00.398 [2024-12-09 23:21:38.631908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.689 ms 00:40:00.398 [2024-12-09 23:21:38.631916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.632279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.398 [2024-12-09 23:21:38.632299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:00.398 [2024-12-09 23:21:38.632308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:40:00.398 [2024-12-09 23:21:38.632316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.666816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.666847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:00.398 [2024-12-09 23:21:38.666857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.666865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.666916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.666925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:00.398 [2024-12-09 23:21:38.666932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.666940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.666987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.667001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:00.398 [2024-12-09 23:21:38.667010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.667017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.667032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.667040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:00.398 [2024-12-09 23:21:38.667047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.667054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.746330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.746371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:00.398 [2024-12-09 23:21:38.746382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.746390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.811531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.811571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:00.398 [2024-12-09 23:21:38.811583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:40:00.398 [2024-12-09 23:21:38.811591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.811672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.811682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:00.398 [2024-12-09 23:21:38.811690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.811701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.811736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.811745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:00.398 [2024-12-09 23:21:38.811753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.811761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.811850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.811860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:00.398 [2024-12-09 23:21:38.811868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.811879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.811908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.811917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:00.398 [2024-12-09 23:21:38.811925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.811933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.811970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.811981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:00.398 [2024-12-09 23:21:38.811988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.811996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.812041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:00.398 [2024-12-09 23:21:38.812059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:00.398 [2024-12-09 23:21:38.812067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:00.398 [2024-12-09 23:21:38.812075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.398 [2024-12-09 23:21:38.812198] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 417.020 ms, result 0 00:40:02.298 00:40:02.298 00:40:02.298 23:21:40 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:40:02.298 [2024-12-09 23:21:40.468789] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:40:02.298 [2024-12-09 23:21:40.468907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79732 ] 00:40:02.298 [2024-12-09 23:21:40.628383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.298 [2024-12-09 23:21:40.745008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.865 [2024-12-09 23:21:41.021372] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:02.865 [2024-12-09 23:21:41.021447] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:02.865 [2024-12-09 23:21:41.175971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.865 [2024-12-09 23:21:41.176021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:02.865 [2024-12-09 23:21:41.176034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:02.865 [2024-12-09 23:21:41.176043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.865 [2024-12-09 23:21:41.176088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.865 [2024-12-09 23:21:41.176102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:02.865 [2024-12-09 23:21:41.176110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:40:02.865 [2024-12-09 23:21:41.176117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.865 [2024-12-09 23:21:41.176138] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:02.865 [2024-12-09 23:21:41.176805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:02.865 [2024-12-09 23:21:41.176828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.865 [2024-12-09 23:21:41.176836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:02.865 [2024-12-09 23:21:41.176845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:40:02.865 [2024-12-09 23:21:41.176852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.865 [2024-12-09 23:21:41.178158] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:02.865 [2024-12-09 23:21:41.190990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.865 [2024-12-09 23:21:41.191025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:02.865 [2024-12-09 23:21:41.191037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.833 ms 00:40:02.865 [2024-12-09 23:21:41.191047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.865 [2024-12-09 23:21:41.191103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.865 [2024-12-09 23:21:41.191113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:02.865 [2024-12-09 23:21:41.191122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:40:02.865 [2024-12-09 23:21:41.191129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.865 [2024-12-09 23:21:41.197434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:40:02.865 [2024-12-09 23:21:41.197463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:02.865 [2024-12-09 23:21:41.197473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.249 ms 00:40:02.865 [2024-12-09 23:21:41.197485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.865 [2024-12-09 23:21:41.197557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.865 [2024-12-09 23:21:41.197568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:02.865 [2024-12-09 23:21:41.197576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:40:02.865 [2024-12-09 23:21:41.197583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.865 [2024-12-09 23:21:41.197620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.865 [2024-12-09 23:21:41.197629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:02.865 [2024-12-09 23:21:41.197637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:02.865 [2024-12-09 23:21:41.197645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.865 [2024-12-09 23:21:41.197669] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:02.865 [2024-12-09 23:21:41.201141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.865 [2024-12-09 23:21:41.201169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:02.865 [2024-12-09 23:21:41.201181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.477 ms 00:40:02.865 [2024-12-09 23:21:41.201189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.865 [2024-12-09 23:21:41.201231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.865 [2024-12-09 23:21:41.201241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:02.865 [2024-12-09 23:21:41.201250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:40:02.865 [2024-12-09 23:21:41.201257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.865 [2024-12-09 23:21:41.201284] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:02.865 [2024-12-09 23:21:41.201305] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:02.865 [2024-12-09 23:21:41.201340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:02.865 [2024-12-09 23:21:41.201360] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:02.866 [2024-12-09 23:21:41.201473] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:02.866 [2024-12-09 23:21:41.201489] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:02.866 [2024-12-09 23:21:41.201503] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:02.866 [2024-12-09 23:21:41.201514] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:02.866 [2024-12-09 23:21:41.201523] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:02.866 [2024-12-09 23:21:41.201531] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:02.866 [2024-12-09 23:21:41.201539] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:02.866 [2024-12-09 23:21:41.201549] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:02.866 [2024-12-09 23:21:41.201556] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:02.866 [2024-12-09 23:21:41.201564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.866 [2024-12-09 23:21:41.201572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:02.866 [2024-12-09 23:21:41.201581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:40:02.866 [2024-12-09 23:21:41.201588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.866 [2024-12-09 23:21:41.201677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.866 [2024-12-09 23:21:41.201686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:02.866 [2024-12-09 23:21:41.201694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:40:02.866 [2024-12-09 23:21:41.201701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.866 [2024-12-09 23:21:41.201815] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:02.866 [2024-12-09 23:21:41.201832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:02.866 [2024-12-09 23:21:41.201841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:02.866 [2024-12-09 23:21:41.201849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:02.866 [2024-12-09 23:21:41.201858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:02.866 [2024-12-09 23:21:41.201865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:02.866 [2024-12-09 23:21:41.201872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:02.866 [2024-12-09 23:21:41.201879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:02.866 [2024-12-09 23:21:41.201887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:02.866 [2024-12-09 23:21:41.201894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:02.866 [2024-12-09 23:21:41.201901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:02.866 [2024-12-09 23:21:41.201908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:02.866 [2024-12-09 23:21:41.201915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:02.866 [2024-12-09 23:21:41.201928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:02.866 [2024-12-09 23:21:41.201936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:02.866 [2024-12-09 23:21:41.201942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:02.866 [2024-12-09 23:21:41.201949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:02.866 [2024-12-09 23:21:41.201956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:02.866 [2024-12-09 23:21:41.201963] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:02.866 [2024-12-09 23:21:41.201970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:02.866 [2024-12-09 23:21:41.201977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:02.866 [2024-12-09 23:21:41.201983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:02.866 [2024-12-09 23:21:41.201990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:02.866 [2024-12-09 23:21:41.201998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:02.866 [2024-12-09 23:21:41.202004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:02.866 [2024-12-09 23:21:41.202011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:02.866 [2024-12-09 23:21:41.202017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:02.866 [2024-12-09 23:21:41.202024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:02.866 [2024-12-09 23:21:41.202030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:02.866 [2024-12-09 23:21:41.202037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:02.866 [2024-12-09 23:21:41.202043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:02.866 [2024-12-09 23:21:41.202050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:02.866 [2024-12-09 23:21:41.202056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:02.866 [2024-12-09 23:21:41.202063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:02.866 [2024-12-09 23:21:41.202070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:02.866 [2024-12-09 23:21:41.202077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:02.866 [2024-12-09 23:21:41.202083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:02.866 [2024-12-09 23:21:41.202089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:02.866 [2024-12-09 23:21:41.202096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:02.866 [2024-12-09 23:21:41.202102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:02.866 [2024-12-09 23:21:41.202109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:02.866 [2024-12-09 23:21:41.202115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:02.866 [2024-12-09 23:21:41.202122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:02.866 [2024-12-09 23:21:41.202129] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:02.866 [2024-12-09 23:21:41.202136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:02.866 [2024-12-09 23:21:41.202143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:02.866 [2024-12-09 23:21:41.202150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:02.866 [2024-12-09 23:21:41.202157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:02.866 [2024-12-09 23:21:41.202164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:02.866 [2024-12-09 23:21:41.202171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:02.866 
[2024-12-09 23:21:41.202180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:02.866 [2024-12-09 23:21:41.202186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:02.866 [2024-12-09 23:21:41.202193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:02.866 [2024-12-09 23:21:41.202201] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:02.866 [2024-12-09 23:21:41.202211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:02.866 [2024-12-09 23:21:41.202232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:02.866 [2024-12-09 23:21:41.202240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:02.866 [2024-12-09 23:21:41.202247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:02.866 [2024-12-09 23:21:41.202255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:02.866 [2024-12-09 23:21:41.202263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:02.866 [2024-12-09 23:21:41.202270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:02.866 [2024-12-09 23:21:41.202277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:02.866 [2024-12-09 23:21:41.202285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:02.866 [2024-12-09 23:21:41.202292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:02.866 [2024-12-09 23:21:41.202300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:02.866 [2024-12-09 23:21:41.202307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:02.866 [2024-12-09 23:21:41.202313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:02.866 [2024-12-09 23:21:41.202320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:02.866 [2024-12-09 23:21:41.202328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:02.866 [2024-12-09 23:21:41.202335] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:02.866 [2024-12-09 23:21:41.202344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:02.866 [2024-12-09 23:21:41.202351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:40:02.866 [2024-12-09 23:21:41.202359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:02.866 [2024-12-09 23:21:41.202367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:02.866 [2024-12-09 23:21:41.202375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:02.866 [2024-12-09 23:21:41.202382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.866 [2024-12-09 23:21:41.202389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:02.866 [2024-12-09 23:21:41.202397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:40:02.866 [2024-12-09 23:21:41.202405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.866 [2024-12-09 23:21:41.230908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.866 [2024-12-09 23:21:41.230942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:02.866 [2024-12-09 23:21:41.230953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.462 ms 00:40:02.866 [2024-12-09 23:21:41.230964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.866 [2024-12-09 23:21:41.231049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.866 [2024-12-09 23:21:41.231058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:02.867 [2024-12-09 23:21:41.231066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:40:02.867 [2024-12-09 23:21:41.231073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.867 [2024-12-09 23:21:41.276427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.867 [2024-12-09 23:21:41.276466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:02.867 [2024-12-09 23:21:41.276478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.307 ms 00:40:02.867 [2024-12-09 23:21:41.276486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.867 [2024-12-09 23:21:41.276526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.867 [2024-12-09 23:21:41.276537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:02.867 [2024-12-09 23:21:41.276549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:02.867 [2024-12-09 23:21:41.276556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.867 [2024-12-09 23:21:41.276996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.867 [2024-12-09 23:21:41.277021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:02.867 [2024-12-09 23:21:41.277030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:40:02.867 [2024-12-09 23:21:41.277038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.867 [2024-12-09 23:21:41.277174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.867 [2024-12-09 23:21:41.277189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:02.867 [2024-12-09 23:21:41.277200] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:40:02.867 [2024-12-09 23:21:41.277208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.867 [2024-12-09 23:21:41.291257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.867 [2024-12-09 23:21:41.291287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:02.867 [2024-12-09 23:21:41.291297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.008 ms 00:40:02.867 [2024-12-09 23:21:41.291305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:02.867 [2024-12-09 23:21:41.304197] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:40:02.867 [2024-12-09 23:21:41.304235] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:02.867 [2024-12-09 23:21:41.304246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:02.867 [2024-12-09 23:21:41.304255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:02.867 [2024-12-09 23:21:41.304264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.853 ms 00:40:02.867 [2024-12-09 23:21:41.304272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.125 [2024-12-09 23:21:41.328561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.125 [2024-12-09 23:21:41.328592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:03.125 [2024-12-09 23:21:41.328603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.251 ms 00:40:03.125 [2024-12-09 23:21:41.328612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.125 [2024-12-09 23:21:41.339882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.125 [2024-12-09 23:21:41.339910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:03.125 [2024-12-09 23:21:41.339920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.236 ms 00:40:03.125 [2024-12-09 23:21:41.339927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.125 [2024-12-09 23:21:41.350942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.125 [2024-12-09 23:21:41.350970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:03.125 [2024-12-09 23:21:41.350980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.984 ms 00:40:03.125 [2024-12-09 23:21:41.350987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.125 [2024-12-09 23:21:41.351610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.125 [2024-12-09 23:21:41.351634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:03.125 [2024-12-09 23:21:41.351647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:40:03.125 [2024-12-09 23:21:41.351655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.125 [2024-12-09 23:21:41.409497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.125 [2024-12-09 23:21:41.409537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:03.125 [2024-12-09 23:21:41.409553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.825 ms 00:40:03.125 [2024-12-09 23:21:41.409561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.125 [2024-12-09 23:21:41.420065] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:40:03.125 [2024-12-09 23:21:41.422627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.125 [2024-12-09 23:21:41.422656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:03.125 [2024-12-09 23:21:41.422668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.026 ms 00:40:03.125 [2024-12-09 23:21:41.422678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.125 [2024-12-09 23:21:41.422762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.125 [2024-12-09 23:21:41.422773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:03.125 [2024-12-09 23:21:41.422785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:03.125 [2024-12-09 23:21:41.422793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.125 [2024-12-09 23:21:41.424369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.125 [2024-12-09 23:21:41.424399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:03.125 [2024-12-09 23:21:41.424409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:40:03.125 [2024-12-09 23:21:41.424417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.125 [2024-12-09 23:21:41.424439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.125 [2024-12-09 23:21:41.424449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:03.125 [2024-12-09 23:21:41.424457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:03.125 [2024-12-09 23:21:41.424465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.125 [2024-12-09 23:21:41.424502] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:03.125 [2024-12-09 23:21:41.424513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.126 [2024-12-09 23:21:41.424522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:03.126 [2024-12-09 23:21:41.424530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:40:03.126 [2024-12-09 23:21:41.424538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.126 [2024-12-09 23:21:41.447912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.126 [2024-12-09 23:21:41.447942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:03.126 [2024-12-09 23:21:41.447956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.356 ms 00:40:03.126 [2024-12-09 23:21:41.447965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:03.126 [2024-12-09 23:21:41.448032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:03.126 [2024-12-09 23:21:41.448041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:03.126 [2024-12-09 23:21:41.448050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:40:03.126 [2024-12-09 23:21:41.448057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:40:03.126 [2024-12-09 23:21:41.449034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.631 ms, result 0 00:40:04.501  [2024-12-09T23:21:43.899Z] Copying: 41/1024 [MB] (41 MBps) [2024-12-09T23:21:44.839Z] Copying: 87/1024 [MB] (46 MBps) [2024-12-09T23:21:45.782Z] Copying: 136/1024 [MB] (49 MBps) [2024-12-09T23:21:46.832Z] Copying: 184/1024 [MB] (47 MBps) [2024-12-09T23:21:47.767Z] Copying: 234/1024 [MB] (50 MBps) [2024-12-09T23:21:48.710Z] Copying: 283/1024 [MB] (49 MBps) [2024-12-09T23:21:49.647Z] Copying: 329/1024 [MB] (45 MBps) [2024-12-09T23:21:51.032Z] Copying: 380/1024 [MB] (51 MBps) [2024-12-09T23:21:51.973Z] Copying: 428/1024 [MB] (48 MBps) [2024-12-09T23:21:52.915Z] Copying: 475/1024 [MB] (46 MBps) [2024-12-09T23:21:53.857Z] Copying: 523/1024 [MB] (47 MBps) [2024-12-09T23:21:54.798Z] Copying: 561/1024 [MB] (38 MBps) [2024-12-09T23:21:55.737Z] Copying: 608/1024 [MB] (47 MBps) [2024-12-09T23:21:56.675Z] Copying: 655/1024 [MB] (47 MBps) [2024-12-09T23:21:58.092Z] Copying: 705/1024 [MB] (49 MBps) [2024-12-09T23:21:58.661Z] Copying: 756/1024 [MB] (50 MBps) [2024-12-09T23:22:00.036Z] Copying: 799/1024 [MB] (42 MBps) [2024-12-09T23:22:00.969Z] Copying: 845/1024 [MB] (45 MBps) [2024-12-09T23:22:01.907Z] Copying: 891/1024 [MB] (46 MBps) [2024-12-09T23:22:02.849Z] Copying: 910/1024 [MB] (19 MBps) [2024-12-09T23:22:03.783Z] Copying: 927/1024 [MB] (16 MBps) [2024-12-09T23:22:04.719Z] Copying: 949/1024 [MB] (21 MBps) [2024-12-09T23:22:05.654Z] Copying: 973/1024 [MB] (24 MBps) [2024-12-09T23:22:06.590Z] Copying: 990/1024 [MB] (16 MBps) [2024-12-09T23:22:07.158Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-12-09 23:22:07.000277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.696 [2024-12-09 23:22:07.000357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:28.696 [2024-12-09 23:22:07.000393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:28.696 [2024-12-09 23:22:07.000408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.696 [2024-12-09 23:22:07.000445] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:28.696 [2024-12-09 23:22:07.007747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.696 [2024-12-09 23:22:07.007797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:28.696 [2024-12-09 23:22:07.007815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.277 ms 00:40:28.696 [2024-12-09 23:22:07.007829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.696 [2024-12-09 23:22:07.008202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.696 [2024-12-09 23:22:07.008247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:28.696 [2024-12-09 23:22:07.008262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:40:28.696 [2024-12-09 23:22:07.008282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.696 [2024-12-09 23:22:07.013228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.696 [2024-12-09 23:22:07.013258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:28.696 [2024-12-09 23:22:07.013268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:40:28.696 [2024-12-09 
23:22:07.013276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.696 [2024-12-09 23:22:07.019356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.696 [2024-12-09 23:22:07.019384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:28.696 [2024-12-09 23:22:07.019393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.051 ms 00:40:28.696 [2024-12-09 23:22:07.019406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.696 [2024-12-09 23:22:07.042588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.696 [2024-12-09 23:22:07.042620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:28.696 [2024-12-09 23:22:07.042631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.147 ms 00:40:28.696 [2024-12-09 23:22:07.042638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.696 [2024-12-09 23:22:07.056112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.696 [2024-12-09 23:22:07.056144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:28.696 [2024-12-09 23:22:07.056156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.443 ms 00:40:28.696 [2024-12-09 23:22:07.056163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.696 [2024-12-09 23:22:07.127147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.696 [2024-12-09 23:22:07.127190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:28.696 [2024-12-09 23:22:07.127203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.949 ms 00:40:28.696 [2024-12-09 23:22:07.127211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.696 [2024-12-09 23:22:07.150077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.696 [2024-12-09 23:22:07.150111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:28.696 [2024-12-09 23:22:07.150121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.840 ms 00:40:28.696 [2024-12-09 23:22:07.150129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.958 [2024-12-09 23:22:07.172746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.958 [2024-12-09 23:22:07.172776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:28.958 [2024-12-09 23:22:07.172786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.586 ms 00:40:28.958 [2024-12-09 23:22:07.172793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.958 [2024-12-09 23:22:07.194782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.958 [2024-12-09 23:22:07.194811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:28.958 [2024-12-09 23:22:07.194821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.960 ms 00:40:28.958 [2024-12-09 23:22:07.194829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.958 [2024-12-09 23:22:07.216940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.958 [2024-12-09 23:22:07.216971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:28.958 [2024-12-09 23:22:07.216980] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.062 ms 00:40:28.958 [2024-12-09 23:22:07.216987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.958 [2024-12-09 23:22:07.217016] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:28.958 [2024-12-09 23:22:07.217030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:40:28.958 [2024-12-09 23:22:07.217039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:40:28.958 [2024-12-09 23:22:07.217200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:28.958 [2024-12-09 23:22:07.217533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217771] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:28.959 [2024-12-09 23:22:07.217802] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:28.959 [2024-12-09 23:22:07.217809] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37f650ee-1fad-44b0-9ad5-d0d80f0dde74 00:40:28.959 [2024-12-09 23:22:07.217817] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:40:28.959 [2024-12-09 23:22:07.217824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 12992 00:40:28.959 [2024-12-09 23:22:07.217831] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12032 00:40:28.959 [2024-12-09 23:22:07.217838] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0798 00:40:28.959 [2024-12-09 23:22:07.217847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:28.959 [2024-12-09 23:22:07.217859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:28.959 [2024-12-09 23:22:07.217866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:28.959 [2024-12-09 23:22:07.217873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:28.959 [2024-12-09 23:22:07.217879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:28.959 [2024-12-09 23:22:07.217886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.959 [2024-12-09 23:22:07.217893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:28.959 [2024-12-09 23:22:07.217901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:40:28.959 [2024-12-09 23:22:07.217908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.230094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.959 [2024-12-09 23:22:07.230124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:28.959 [2024-12-09 23:22:07.230138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.171 ms 00:40:28.959 [2024-12-09 23:22:07.230146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.230487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.959 [2024-12-09 23:22:07.230507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:28.959 [2024-12-09 23:22:07.230515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:40:28.959 [2024-12-09 23:22:07.230522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.262538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.262575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:28.959 [2024-12-09 23:22:07.262584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.262591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.262642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:40:28.959 [2024-12-09 23:22:07.262650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:28.959 [2024-12-09 23:22:07.262658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.262665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.262714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.262723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:28.959 [2024-12-09 23:22:07.262734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.262741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.262756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.262763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:28.959 [2024-12-09 23:22:07.262771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.262778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.337749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.337794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:28.959 [2024-12-09 23:22:07.337806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.337813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.399411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.399451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:28.959 [2024-12-09 23:22:07.399460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.399468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.399525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.399534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:28.959 [2024-12-09 23:22:07.399542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.399552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.399586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.399594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:28.959 [2024-12-09 23:22:07.399602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.399609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.399689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.399699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:28.959 [2024-12-09 23:22:07.399706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.399713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 
[2024-12-09 23:22:07.399743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.399752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:28.959 [2024-12-09 23:22:07.399759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.399766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.399799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.399808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:28.959 [2024-12-09 23:22:07.399815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.399822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.399864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.959 [2024-12-09 23:22:07.399873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:28.959 [2024-12-09 23:22:07.399881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.959 [2024-12-09 23:22:07.399888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.959 [2024-12-09 23:22:07.399996] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 399.732 ms, result 0 00:40:29.897 00:40:29.897 00:40:29.897 23:22:08 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:40:32.438 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:40:32.438 23:22:10 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:40:32.438 23:22:10 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:40:32.438 23:22:10 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:40:32.438 23:22:10 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:40:32.438 23:22:10 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:32.438 23:22:10 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77441 00:40:32.438 23:22:10 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77441 ']' 00:40:32.438 Process with pid 77441 is not found 00:40:32.438 23:22:10 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77441 00:40:32.439 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77441) - No such process 00:40:32.439 23:22:10 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77441 is not found' 00:40:32.439 Remove shared memory files 00:40:32.439 23:22:10 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:40:32.439 23:22:10 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:40:32.439 23:22:10 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:40:32.439 23:22:10 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:40:32.439 23:22:10 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:40:32.439 23:22:10 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:40:32.439 23:22:10 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:40:32.439 00:40:32.439 real 4m21.173s 00:40:32.439 user 4m8.290s 00:40:32.439 sys 0m12.813s 00:40:32.439 23:22:10 ftl.ftl_restore -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:40:32.439 ************************************ 00:40:32.439 23:22:10 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:40:32.439 END TEST ftl_restore 00:40:32.439 ************************************ 00:40:32.439 23:22:10 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:40:32.439 23:22:10 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:40:32.439 23:22:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:32.439 23:22:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:40:32.439 ************************************ 00:40:32.439 START TEST ftl_dirty_shutdown 00:40:32.439 ************************************ 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:40:32.439 * Looking for test storage... 00:40:32.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:32.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.439 --rc genhtml_branch_coverage=1 00:40:32.439 --rc genhtml_function_coverage=1 00:40:32.439 --rc genhtml_legend=1 00:40:32.439 --rc geninfo_all_blocks=1 00:40:32.439 --rc geninfo_unexecuted_blocks=1 00:40:32.439 00:40:32.439 ' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:32.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.439 --rc genhtml_branch_coverage=1 00:40:32.439 --rc genhtml_function_coverage=1 00:40:32.439 --rc genhtml_legend=1 00:40:32.439 --rc geninfo_all_blocks=1 00:40:32.439 --rc geninfo_unexecuted_blocks=1 00:40:32.439 00:40:32.439 ' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:32.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.439 --rc genhtml_branch_coverage=1 00:40:32.439 --rc genhtml_function_coverage=1 00:40:32.439 --rc genhtml_legend=1 00:40:32.439 --rc geninfo_all_blocks=1 00:40:32.439 --rc geninfo_unexecuted_blocks=1 00:40:32.439 00:40:32.439 ' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:32.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.439 --rc genhtml_branch_coverage=1 00:40:32.439 --rc genhtml_function_coverage=1 00:40:32.439 --rc genhtml_legend=1 00:40:32.439 --rc geninfo_all_blocks=1 00:40:32.439 --rc geninfo_unexecuted_blocks=1 00:40:32.439 00:40:32.439 ' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:40:32.439 23:22:10 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:40:32.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80109 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80109 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80109 ']' 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:32.439 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:32.440 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:32.440 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:32.440 23:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:32.440 [2024-12-09 23:22:10.668253] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
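The waitforlisten 80109 call above blocks until the freshly started spdk_tgt (pid 80109, core mask 0x1) answers on its RPC socket; the (( i == 0 )) / return 0 xtrace a few records below marks that wait completing, after which every rpc.py call in the test (bdev_nvme_attach_controller, bdev_lvol_create_lvstore, bdev_split_create and finally bdev_ftl_create) goes over the same socket. A minimal stand-in for that wait loop, assuming the default /var/tmp/spdk.sock path; the real helper in autotest_common.sh does considerably more bookkeeping:

#!/usr/bin/env bash
# Sketch of the waitforlisten idea, not the autotest_common.sh implementation:
# poll the RPC socket until the freshly started target answers, and bail out
# if the target process dies first.
pid=$1
sock=${2:-/var/tmp/spdk.sock}
for ((i = 0; i < 100; i++)); do
    # Give up early if the target died during startup.
    kill -0 "$pid" 2> /dev/null || exit 1
    # rpc_get_methods succeeds as soon as the RPC server is listening.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 \
            rpc_get_methods > /dev/null 2>&1; then
        exit 0
    fi
    sleep 0.5
done
exit 1  # timed out waiting for the RPC socket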
00:40:32.440 [2024-12-09 23:22:10.668563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80109 ] 00:40:32.440 [2024-12-09 23:22:10.831036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.700 [2024-12-09 23:22:10.924156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.274 23:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:33.274 23:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:40:33.274 23:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:40:33.274 23:22:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:40:33.274 23:22:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:40:33.274 23:22:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:40:33.274 23:22:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:40:33.274 23:22:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:40:33.548 23:22:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:40:33.548 23:22:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:40:33.548 23:22:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:40:33.548 23:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:40:33.548 23:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:40:33.548 23:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:40:33.548 23:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:40:33.548 23:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:40:33.548 23:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:40:33.548 { 00:40:33.548 "name": "nvme0n1", 00:40:33.548 "aliases": [ 00:40:33.548 "5d10801b-e2e0-4d20-92ce-82f1efc58287" 00:40:33.548 ], 00:40:33.548 "product_name": "NVMe disk", 00:40:33.548 "block_size": 4096, 00:40:33.548 "num_blocks": 1310720, 00:40:33.548 "uuid": "5d10801b-e2e0-4d20-92ce-82f1efc58287", 00:40:33.548 "numa_id": -1, 00:40:33.548 "assigned_rate_limits": { 00:40:33.548 "rw_ios_per_sec": 0, 00:40:33.548 "rw_mbytes_per_sec": 0, 00:40:33.548 "r_mbytes_per_sec": 0, 00:40:33.548 "w_mbytes_per_sec": 0 00:40:33.548 }, 00:40:33.548 "claimed": true, 00:40:33.548 "claim_type": "read_many_write_one", 00:40:33.548 "zoned": false, 00:40:33.548 "supported_io_types": { 00:40:33.548 "read": true, 00:40:33.548 "write": true, 00:40:33.548 "unmap": true, 00:40:33.548 "flush": true, 00:40:33.548 "reset": true, 00:40:33.548 "nvme_admin": true, 00:40:33.548 "nvme_io": true, 00:40:33.548 "nvme_io_md": false, 00:40:33.548 "write_zeroes": true, 00:40:33.548 "zcopy": false, 00:40:33.548 "get_zone_info": false, 00:40:33.548 "zone_management": false, 00:40:33.548 "zone_append": false, 00:40:33.548 "compare": true, 00:40:33.548 "compare_and_write": false, 00:40:33.548 "abort": true, 00:40:33.548 "seek_hole": false, 00:40:33.548 "seek_data": false, 00:40:33.548 
"copy": true, 00:40:33.548 "nvme_iov_md": false 00:40:33.548 }, 00:40:33.548 "driver_specific": { 00:40:33.548 "nvme": [ 00:40:33.548 { 00:40:33.548 "pci_address": "0000:00:11.0", 00:40:33.548 "trid": { 00:40:33.548 "trtype": "PCIe", 00:40:33.548 "traddr": "0000:00:11.0" 00:40:33.548 }, 00:40:33.548 "ctrlr_data": { 00:40:33.548 "cntlid": 0, 00:40:33.548 "vendor_id": "0x1b36", 00:40:33.548 "model_number": "QEMU NVMe Ctrl", 00:40:33.548 "serial_number": "12341", 00:40:33.548 "firmware_revision": "8.0.0", 00:40:33.548 "subnqn": "nqn.2019-08.org.qemu:12341", 00:40:33.548 "oacs": { 00:40:33.548 "security": 0, 00:40:33.548 "format": 1, 00:40:33.548 "firmware": 0, 00:40:33.548 "ns_manage": 1 00:40:33.548 }, 00:40:33.548 "multi_ctrlr": false, 00:40:33.548 "ana_reporting": false 00:40:33.548 }, 00:40:33.548 "vs": { 00:40:33.548 "nvme_version": "1.4" 00:40:33.548 }, 00:40:33.548 "ns_data": { 00:40:33.548 "id": 1, 00:40:33.548 "can_share": false 00:40:33.548 } 00:40:33.548 } 00:40:33.548 ], 00:40:33.548 "mp_policy": "active_passive" 00:40:33.548 } 00:40:33.548 } 00:40:33.548 ]' 00:40:33.548 23:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=e68f867e-5095-49c3-932b-2759c08571fc 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:40:33.809 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e68f867e-5095-49c3-932b-2759c08571fc 00:40:34.070 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:40:34.331 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=ff39f0bb-5a33-4fca-8826-5515983a29a0 00:40:34.331 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ff39f0bb-5a33-4fca-8826-5515983a29a0 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:40:34.592 23:22:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:34.592 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:40:34.592 { 00:40:34.592 "name": "ad3ac884-2837-4690-8504-6e393d19d9ee", 00:40:34.592 "aliases": [ 00:40:34.592 "lvs/nvme0n1p0" 00:40:34.592 ], 00:40:34.592 "product_name": "Logical Volume", 00:40:34.592 "block_size": 4096, 00:40:34.592 "num_blocks": 26476544, 00:40:34.592 "uuid": "ad3ac884-2837-4690-8504-6e393d19d9ee", 00:40:34.592 "assigned_rate_limits": { 00:40:34.592 "rw_ios_per_sec": 0, 00:40:34.592 "rw_mbytes_per_sec": 0, 00:40:34.592 "r_mbytes_per_sec": 0, 00:40:34.592 "w_mbytes_per_sec": 0 00:40:34.592 }, 00:40:34.592 "claimed": false, 00:40:34.592 "zoned": false, 00:40:34.592 "supported_io_types": { 00:40:34.592 "read": true, 00:40:34.592 "write": true, 00:40:34.592 "unmap": true, 00:40:34.592 "flush": false, 00:40:34.592 "reset": true, 00:40:34.592 "nvme_admin": false, 00:40:34.592 "nvme_io": false, 00:40:34.592 "nvme_io_md": false, 00:40:34.592 "write_zeroes": true, 00:40:34.592 "zcopy": false, 00:40:34.592 "get_zone_info": false, 00:40:34.593 "zone_management": false, 00:40:34.593 "zone_append": false, 00:40:34.593 "compare": false, 00:40:34.593 "compare_and_write": false, 00:40:34.593 "abort": false, 00:40:34.593 "seek_hole": true, 00:40:34.593 "seek_data": true, 00:40:34.593 "copy": false, 00:40:34.593 "nvme_iov_md": false 00:40:34.593 }, 00:40:34.593 "driver_specific": { 00:40:34.593 "lvol": { 00:40:34.593 "lvol_store_uuid": "ff39f0bb-5a33-4fca-8826-5515983a29a0", 00:40:34.593 "base_bdev": "nvme0n1", 00:40:34.593 "thin_provision": true, 00:40:34.593 "num_allocated_clusters": 0, 00:40:34.593 "snapshot": false, 00:40:34.593 "clone": false, 00:40:34.593 "esnap_clone": false 00:40:34.593 } 00:40:34.593 } 00:40:34.593 } 00:40:34.593 ]' 00:40:34.593 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:40:34.853 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:40:34.853 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:40:34.853 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:40:34.853 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:40:34.853 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:40:34.853 23:22:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:40:34.853 23:22:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:40:34.853 23:22:13 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:40:35.115 { 00:40:35.115 "name": "ad3ac884-2837-4690-8504-6e393d19d9ee", 00:40:35.115 "aliases": [ 00:40:35.115 "lvs/nvme0n1p0" 00:40:35.115 ], 00:40:35.115 "product_name": "Logical Volume", 00:40:35.115 "block_size": 4096, 00:40:35.115 "num_blocks": 26476544, 00:40:35.115 "uuid": "ad3ac884-2837-4690-8504-6e393d19d9ee", 00:40:35.115 "assigned_rate_limits": { 00:40:35.115 "rw_ios_per_sec": 0, 00:40:35.115 "rw_mbytes_per_sec": 0, 00:40:35.115 "r_mbytes_per_sec": 0, 00:40:35.115 "w_mbytes_per_sec": 0 00:40:35.115 }, 00:40:35.115 "claimed": false, 00:40:35.115 "zoned": false, 00:40:35.115 "supported_io_types": { 00:40:35.115 "read": true, 00:40:35.115 "write": true, 00:40:35.115 "unmap": true, 00:40:35.115 "flush": false, 00:40:35.115 "reset": true, 00:40:35.115 "nvme_admin": false, 00:40:35.115 "nvme_io": false, 00:40:35.115 "nvme_io_md": false, 00:40:35.115 "write_zeroes": true, 00:40:35.115 "zcopy": false, 00:40:35.115 "get_zone_info": false, 00:40:35.115 "zone_management": false, 00:40:35.115 "zone_append": false, 00:40:35.115 "compare": false, 00:40:35.115 "compare_and_write": false, 00:40:35.115 "abort": false, 00:40:35.115 "seek_hole": true, 00:40:35.115 "seek_data": true, 00:40:35.115 "copy": false, 00:40:35.115 "nvme_iov_md": false 00:40:35.115 }, 00:40:35.115 "driver_specific": { 00:40:35.115 "lvol": { 00:40:35.115 "lvol_store_uuid": "ff39f0bb-5a33-4fca-8826-5515983a29a0", 00:40:35.115 "base_bdev": "nvme0n1", 00:40:35.115 "thin_provision": true, 00:40:35.115 "num_allocated_clusters": 0, 00:40:35.115 "snapshot": false, 00:40:35.115 "clone": false, 00:40:35.115 "esnap_clone": false 00:40:35.115 } 00:40:35.115 } 00:40:35.115 } 00:40:35.115 ]' 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:40:35.115 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:40:35.376 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad3ac884-2837-4690-8504-6e393d19d9ee 00:40:35.638 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:40:35.638 { 00:40:35.638 "name": "ad3ac884-2837-4690-8504-6e393d19d9ee", 00:40:35.638 "aliases": [ 00:40:35.638 "lvs/nvme0n1p0" 00:40:35.638 ], 00:40:35.638 "product_name": "Logical Volume", 00:40:35.638 "block_size": 4096, 00:40:35.638 "num_blocks": 26476544, 00:40:35.638 "uuid": "ad3ac884-2837-4690-8504-6e393d19d9ee", 00:40:35.638 "assigned_rate_limits": { 00:40:35.638 "rw_ios_per_sec": 0, 00:40:35.638 "rw_mbytes_per_sec": 0, 00:40:35.638 "r_mbytes_per_sec": 0, 00:40:35.638 "w_mbytes_per_sec": 0 00:40:35.638 }, 00:40:35.638 "claimed": false, 00:40:35.638 "zoned": false, 00:40:35.638 "supported_io_types": { 00:40:35.638 "read": true, 00:40:35.638 "write": true, 00:40:35.638 "unmap": true, 00:40:35.638 "flush": false, 00:40:35.638 "reset": true, 00:40:35.638 "nvme_admin": false, 00:40:35.638 "nvme_io": false, 00:40:35.638 "nvme_io_md": false, 00:40:35.638 "write_zeroes": true, 00:40:35.638 "zcopy": false, 00:40:35.638 "get_zone_info": false, 00:40:35.638 "zone_management": false, 00:40:35.638 "zone_append": false, 00:40:35.638 "compare": false, 00:40:35.638 "compare_and_write": false, 00:40:35.638 "abort": false, 00:40:35.638 "seek_hole": true, 00:40:35.638 "seek_data": true, 00:40:35.638 "copy": false, 00:40:35.638 "nvme_iov_md": false 00:40:35.638 }, 00:40:35.638 "driver_specific": { 00:40:35.638 "lvol": { 00:40:35.638 "lvol_store_uuid": "ff39f0bb-5a33-4fca-8826-5515983a29a0", 00:40:35.638 "base_bdev": "nvme0n1", 00:40:35.638 "thin_provision": true, 00:40:35.638 "num_allocated_clusters": 0, 00:40:35.638 "snapshot": false, 00:40:35.638 "clone": false, 00:40:35.638 "esnap_clone": false 00:40:35.638 } 00:40:35.638 } 00:40:35.638 } 00:40:35.638 ]' 00:40:35.638 23:22:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ad3ac884-2837-4690-8504-6e393d19d9ee 
--l2p_dram_limit 10' 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:40:35.638 23:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ad3ac884-2837-4690-8504-6e393d19d9ee --l2p_dram_limit 10 -c nvc0n1p0 00:40:35.900 [2024-12-09 23:22:14.242023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.242181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:35.900 [2024-12-09 23:22:14.242210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:35.900 [2024-12-09 23:22:14.242234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.242312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.242324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:35.900 [2024-12-09 23:22:14.242335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:40:35.900 [2024-12-09 23:22:14.242343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.242373] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:35.900 [2024-12-09 23:22:14.243170] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:35.900 [2024-12-09 23:22:14.243202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.243211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:35.900 [2024-12-09 23:22:14.243234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:40:35.900 [2024-12-09 23:22:14.243243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.243316] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 290e4479-0ec0-4dd4-8c2a-b1b4f77d8eec 00:40:35.900 [2024-12-09 23:22:14.244682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.244721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:40:35.900 [2024-12-09 23:22:14.244732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:40:35.900 [2024-12-09 23:22:14.244743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.251861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.251898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:35.900 [2024-12-09 23:22:14.251908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.061 ms 00:40:35.900 [2024-12-09 23:22:14.251917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.252003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.252015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:35.900 [2024-12-09 23:22:14.252024] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:40:35.900 [2024-12-09 23:22:14.252037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.252099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.252112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:35.900 [2024-12-09 23:22:14.252123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:40:35.900 [2024-12-09 23:22:14.252132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.252153] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:35.900 [2024-12-09 23:22:14.256134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.256163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:35.900 [2024-12-09 23:22:14.256175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.983 ms 00:40:35.900 [2024-12-09 23:22:14.256183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.256231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.256240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:35.900 [2024-12-09 23:22:14.256250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:40:35.900 [2024-12-09 23:22:14.256259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.256303] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:40:35.900 [2024-12-09 23:22:14.256448] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:35.900 [2024-12-09 23:22:14.256652] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:35.900 [2024-12-09 23:22:14.256671] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:35.900 [2024-12-09 23:22:14.256683] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:35.900 [2024-12-09 23:22:14.256693] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:35.900 [2024-12-09 23:22:14.256703] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:35.900 [2024-12-09 23:22:14.256712] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:35.900 [2024-12-09 23:22:14.256727] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:35.900 [2024-12-09 23:22:14.256734] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:35.900 [2024-12-09 23:22:14.256745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.256760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:35.900 [2024-12-09 23:22:14.256769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:40:35.900 [2024-12-09 23:22:14.256777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.256870] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.900 [2024-12-09 23:22:14.256879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:35.900 [2024-12-09 23:22:14.256890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:40:35.900 [2024-12-09 23:22:14.256897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.900 [2024-12-09 23:22:14.256998] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:35.901 [2024-12-09 23:22:14.257009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:35.901 [2024-12-09 23:22:14.257019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:35.901 [2024-12-09 23:22:14.257027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:35.901 [2024-12-09 23:22:14.257044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:35.901 [2024-12-09 23:22:14.257059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:35.901 [2024-12-09 23:22:14.257068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:35.901 [2024-12-09 23:22:14.257085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:35.901 [2024-12-09 23:22:14.257092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:35.901 [2024-12-09 23:22:14.257100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:35.901 [2024-12-09 23:22:14.257107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:35.901 [2024-12-09 23:22:14.257116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:35.901 [2024-12-09 23:22:14.257122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:35.901 [2024-12-09 23:22:14.257139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:35.901 [2024-12-09 23:22:14.257147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:35.901 [2024-12-09 23:22:14.257164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:35.901 [2024-12-09 23:22:14.257179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:35.901 [2024-12-09 23:22:14.257186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:35.901 [2024-12-09 23:22:14.257201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:35.901 [2024-12-09 23:22:14.257209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:35.901 [2024-12-09 23:22:14.257237] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:35.901 [2024-12-09 23:22:14.257243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:35.901 [2024-12-09 23:22:14.257261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:35.901 [2024-12-09 23:22:14.257272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:35.901 [2024-12-09 23:22:14.257288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:35.901 [2024-12-09 23:22:14.257296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:35.901 [2024-12-09 23:22:14.257306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:35.901 [2024-12-09 23:22:14.257313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:35.901 [2024-12-09 23:22:14.257321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:35.901 [2024-12-09 23:22:14.257328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:35.901 [2024-12-09 23:22:14.257344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:35.901 [2024-12-09 23:22:14.257352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257359] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:35.901 [2024-12-09 23:22:14.257368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:35.901 [2024-12-09 23:22:14.257376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:35.901 [2024-12-09 23:22:14.257396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:35.901 [2024-12-09 23:22:14.257404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:35.901 [2024-12-09 23:22:14.257414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:35.901 [2024-12-09 23:22:14.257421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:35.901 [2024-12-09 23:22:14.257429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:35.901 [2024-12-09 23:22:14.257436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:35.901 [2024-12-09 23:22:14.257445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:35.901 [2024-12-09 23:22:14.257453] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:35.901 [2024-12-09 23:22:14.257466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:35.901 [2024-12-09 23:22:14.257475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:35.901 [2024-12-09 23:22:14.257485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:35.901 [2024-12-09 23:22:14.257492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:35.901 [2024-12-09 23:22:14.257501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:35.901 [2024-12-09 23:22:14.257507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:35.901 [2024-12-09 23:22:14.257517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:35.901 [2024-12-09 23:22:14.257525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:35.901 [2024-12-09 23:22:14.257536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:35.901 [2024-12-09 23:22:14.257543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:35.901 [2024-12-09 23:22:14.257555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:35.901 [2024-12-09 23:22:14.257562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:35.901 [2024-12-09 23:22:14.257572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:35.901 [2024-12-09 23:22:14.257580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:35.901 [2024-12-09 23:22:14.257589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:35.901 [2024-12-09 23:22:14.257597] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:35.901 [2024-12-09 23:22:14.257607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:35.901 [2024-12-09 23:22:14.257616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:35.901 [2024-12-09 23:22:14.257626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:35.901 [2024-12-09 23:22:14.257633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:35.901 [2024-12-09 23:22:14.257642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:35.901 [2024-12-09 23:22:14.257649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:35.901 [2024-12-09 23:22:14.257659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:35.901 [2024-12-09 23:22:14.257666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:40:35.901 [2024-12-09 23:22:14.257675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:35.901 [2024-12-09 23:22:14.257713] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:40:35.901 [2024-12-09 23:22:14.257725] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:40:39.203 [2024-12-09 23:22:16.996176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:16.996449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:40:39.203 [2024-12-09 23:22:16.996472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2738.452 ms 00:40:39.203 [2024-12-09 23:22:16.996484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.024967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.025012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:39.203 [2024-12-09 23:22:17.025025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.273 ms 00:40:39.203 [2024-12-09 23:22:17.025034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.025157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.025171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:39.203 [2024-12-09 23:22:17.025179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:40:39.203 [2024-12-09 23:22:17.025194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.057767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.057803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:39.203 [2024-12-09 23:22:17.057813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.522 ms 00:40:39.203 [2024-12-09 23:22:17.057823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.057850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.057864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:39.203 [2024-12-09 23:22:17.057872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:39.203 [2024-12-09 23:22:17.057888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.058344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.058366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:39.203 [2024-12-09 23:22:17.058376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:40:39.203 [2024-12-09 23:22:17.058387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.058490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.058504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:39.203 [2024-12-09 23:22:17.058516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:40:39.203 [2024-12-09 23:22:17.058528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.074031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.074065] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:39.203 [2024-12-09 23:22:17.074075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.485 ms 00:40:39.203 [2024-12-09 23:22:17.074085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.100315] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:40:39.203 [2024-12-09 23:22:17.103691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.103723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:39.203 [2024-12-09 23:22:17.103738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.516 ms 00:40:39.203 [2024-12-09 23:22:17.103746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.174252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.174418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:40:39.203 [2024-12-09 23:22:17.174441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.468 ms 00:40:39.203 [2024-12-09 23:22:17.174450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.174630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.174646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:39.203 [2024-12-09 23:22:17.174659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:40:39.203 [2024-12-09 23:22:17.174668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.197370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.197566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:40:39.203 [2024-12-09 23:22:17.197587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.639 ms 00:40:39.203 [2024-12-09 23:22:17.197596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.220284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.220399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:40:39.203 [2024-12-09 23:22:17.220418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.617 ms 00:40:39.203 [2024-12-09 23:22:17.220426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.220997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.221014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:39.203 [2024-12-09 23:22:17.221025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:40:39.203 [2024-12-09 23:22:17.221036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.291779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.291900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:40:39.203 [2024-12-09 23:22:17.291923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.708 ms 00:40:39.203 [2024-12-09 23:22:17.291931] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.316738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.316768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:40:39.203 [2024-12-09 23:22:17.316781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.740 ms 00:40:39.203 [2024-12-09 23:22:17.316790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.339056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.339172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:40:39.203 [2024-12-09 23:22:17.339191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.229 ms 00:40:39.203 [2024-12-09 23:22:17.339199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.363169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.363199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:39.203 [2024-12-09 23:22:17.363211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.924 ms 00:40:39.203 [2024-12-09 23:22:17.363230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.363269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.203 [2024-12-09 23:22:17.363279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:39.203 [2024-12-09 23:22:17.363293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:39.203 [2024-12-09 23:22:17.363300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.203 [2024-12-09 23:22:17.363377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.204 [2024-12-09 23:22:17.363390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:39.204 [2024-12-09 23:22:17.363401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:40:39.204 [2024-12-09 23:22:17.363408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.204 [2024-12-09 23:22:17.364397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3121.922 ms, result 0 00:40:39.204 { 00:40:39.204 "name": "ftl0", 00:40:39.204 "uuid": "290e4479-0ec0-4dd4-8c2a-b1b4f77d8eec" 00:40:39.204 } 00:40:39.204 23:22:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:40:39.204 23:22:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:40:39.204 23:22:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:40:39.204 23:22:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:40:39.204 23:22:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:40:39.465 /dev/nbd0 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:40:39.465 1+0 records in 00:40:39.465 1+0 records out 00:40:39.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278197 s, 14.7 MB/s 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:40:39.465 23:22:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:40:39.465 [2024-12-09 23:22:17.800809] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:40:39.465 [2024-12-09 23:22:17.800922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80242 ] 00:40:39.726 [2024-12-09 23:22:17.954651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.726 [2024-12-09 23:22:18.032325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:41.105  [2024-12-09T23:22:20.501Z] Copying: 257/1024 [MB] (257 MBps) [2024-12-09T23:22:21.436Z] Copying: 515/1024 [MB] (257 MBps) [2024-12-09T23:22:22.371Z] Copying: 770/1024 [MB] (255 MBps) [2024-12-09T23:22:22.371Z] Copying: 1023/1024 [MB] (253 MBps) [2024-12-09T23:22:22.937Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:40:44.475 00:40:44.475 23:22:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:40:47.005 23:22:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:40:47.005 [2024-12-09 23:22:24.985318] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
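To recap what the trace above has set up before the data is written below: bdev_ftl_create brought up ftl0 (UUID 290e4479-0ec0-4dd4-8c2a-b1b4f77d8eec) on the 103424 MiB logical volume ad3ac884-2837-4690-8504-6e393d19d9ee, with the 5171 MiB split nvc0n1p0 as write-buffer cache and a 10 MiB L2P DRAM limit; the device was then exposed as /dev/nbd0, 1 GiB of random data (262144 x 4096-byte blocks) was staged in testfile and checksummed, and the spdk_dd run starting here copies that file onto the FTL bdev. A condensed sketch of those steps, with the long /home/vagrant/spdk_repo binary paths shortened to rpc.py and spdk_dd:
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d ad3ac884-2837-4690-8504-6e393d19d9ee --l2p_dram_limit 10 -c nvc0n1p0
  rpc.py nbd_start_disk ftl0 /dev/nbd0
  spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144              # stage 1 GiB of random data
  md5sum testfile                                                                      # reference checksum, presumably for later verification
  spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct  # write it through ftl0 via NBD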
00:40:47.005 [2024-12-09 23:22:24.985441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80318 ] 00:40:47.005 [2024-12-09 23:22:25.146752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:47.005 [2024-12-09 23:22:25.243028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:48.378  [2024-12-09T23:22:27.775Z] Copying: 29/1024 [MB] (29 MBps) [2024-12-09T23:22:28.708Z] Copying: 58/1024 [MB] (28 MBps) [2024-12-09T23:22:29.643Z] Copying: 86/1024 [MB] (27 MBps) [2024-12-09T23:22:30.577Z] Copying: 113/1024 [MB] (27 MBps) [2024-12-09T23:22:31.552Z] Copying: 140/1024 [MB] (26 MBps) [2024-12-09T23:22:32.509Z] Copying: 165/1024 [MB] (25 MBps) [2024-12-09T23:22:33.887Z] Copying: 194/1024 [MB] (28 MBps) [2024-12-09T23:22:34.821Z] Copying: 223/1024 [MB] (29 MBps) [2024-12-09T23:22:35.755Z] Copying: 252/1024 [MB] (29 MBps) [2024-12-09T23:22:36.689Z] Copying: 283/1024 [MB] (30 MBps) [2024-12-09T23:22:37.624Z] Copying: 312/1024 [MB] (29 MBps) [2024-12-09T23:22:38.558Z] Copying: 342/1024 [MB] (30 MBps) [2024-12-09T23:22:39.491Z] Copying: 373/1024 [MB] (30 MBps) [2024-12-09T23:22:40.864Z] Copying: 403/1024 [MB] (30 MBps) [2024-12-09T23:22:41.798Z] Copying: 438/1024 [MB] (34 MBps) [2024-12-09T23:22:42.733Z] Copying: 469/1024 [MB] (31 MBps) [2024-12-09T23:22:43.668Z] Copying: 500/1024 [MB] (31 MBps) [2024-12-09T23:22:44.602Z] Copying: 532/1024 [MB] (31 MBps) [2024-12-09T23:22:45.537Z] Copying: 565/1024 [MB] (33 MBps) [2024-12-09T23:22:46.471Z] Copying: 595/1024 [MB] (29 MBps) [2024-12-09T23:22:47.844Z] Copying: 627/1024 [MB] (31 MBps) [2024-12-09T23:22:48.783Z] Copying: 656/1024 [MB] (29 MBps) [2024-12-09T23:22:49.716Z] Copying: 686/1024 [MB] (29 MBps) [2024-12-09T23:22:50.672Z] Copying: 717/1024 [MB] (30 MBps) [2024-12-09T23:22:51.606Z] Copying: 753/1024 [MB] (36 MBps) [2024-12-09T23:22:52.539Z] Copying: 783/1024 [MB] (30 MBps) [2024-12-09T23:22:53.473Z] Copying: 813/1024 [MB] (29 MBps) [2024-12-09T23:22:54.847Z] Copying: 845/1024 [MB] (31 MBps) [2024-12-09T23:22:55.781Z] Copying: 875/1024 [MB] (30 MBps) [2024-12-09T23:22:56.714Z] Copying: 910/1024 [MB] (34 MBps) [2024-12-09T23:22:57.648Z] Copying: 940/1024 [MB] (30 MBps) [2024-12-09T23:22:58.586Z] Copying: 970/1024 [MB] (29 MBps) [2024-12-09T23:22:59.528Z] Copying: 998/1024 [MB] (27 MBps) [2024-12-09T23:23:00.100Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:41:21.638 00:41:21.638 23:22:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:41:21.638 23:22:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:41:21.898 23:23:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:41:21.898 [2024-12-09 23:23:00.292150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:21.898 [2024-12-09 23:23:00.292298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:21.898 [2024-12-09 23:23:00.292316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:21.898 [2024-12-09 23:23:00.292325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:21.898 [2024-12-09 23:23:00.292351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:41:21.898 [2024-12-09 23:23:00.294455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:21.898 [2024-12-09 23:23:00.294480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:21.898 [2024-12-09 23:23:00.294490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:41:21.898 [2024-12-09 23:23:00.294496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:21.898 [2024-12-09 23:23:00.296915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:21.898 [2024-12-09 23:23:00.296950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:21.898 [2024-12-09 23:23:00.296960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.395 ms 00:41:21.898 [2024-12-09 23:23:00.296967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:21.898 [2024-12-09 23:23:00.312146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:21.898 [2024-12-09 23:23:00.312280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:21.898 [2024-12-09 23:23:00.312299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.158 ms 00:41:21.898 [2024-12-09 23:23:00.312306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:21.898 [2024-12-09 23:23:00.317017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:21.898 [2024-12-09 23:23:00.317042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:21.898 [2024-12-09 23:23:00.317053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.680 ms 00:41:21.898 [2024-12-09 23:23:00.317060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:21.898 [2024-12-09 23:23:00.335495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:21.898 [2024-12-09 23:23:00.335597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:21.898 [2024-12-09 23:23:00.335613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.384 ms 00:41:21.899 [2024-12-09 23:23:00.335619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:21.899 [2024-12-09 23:23:00.347824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:21.899 [2024-12-09 23:23:00.347853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:21.899 [2024-12-09 23:23:00.347866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.176 ms 00:41:21.899 [2024-12-09 23:23:00.347872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:21.899 [2024-12-09 23:23:00.347987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:21.899 [2024-12-09 23:23:00.347996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:21.899 [2024-12-09 23:23:00.348004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:41:21.899 [2024-12-09 23:23:00.348010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.160 [2024-12-09 23:23:00.365599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:22.160 [2024-12-09 23:23:00.365623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:22.160 [2024-12-09 23:23:00.365633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.573 ms 00:41:22.160 
[2024-12-09 23:23:00.365638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.160 [2024-12-09 23:23:00.383212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:22.160 [2024-12-09 23:23:00.383246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:22.160 [2024-12-09 23:23:00.383256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.544 ms 00:41:22.160 [2024-12-09 23:23:00.383261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.160 [2024-12-09 23:23:00.400153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:22.160 [2024-12-09 23:23:00.400178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:22.160 [2024-12-09 23:23:00.400188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.861 ms 00:41:22.160 [2024-12-09 23:23:00.400194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.160 [2024-12-09 23:23:00.417101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:22.160 [2024-12-09 23:23:00.417126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:22.160 [2024-12-09 23:23:00.417135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.839 ms 00:41:22.160 [2024-12-09 23:23:00.417141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.160 [2024-12-09 23:23:00.417168] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:22.160 [2024-12-09 23:23:00.417179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:22.160 [2024-12-09 23:23:00.417188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:22.160 [2024-12-09 23:23:00.417194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:41:22.161 [2024-12-09 23:23:00.417280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:22.161 [2024-12-09 23:23:00.417682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417772] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:22.162 [2024-12-09 23:23:00.417854] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:22.162 [2024-12-09 23:23:00.417862] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 290e4479-0ec0-4dd4-8c2a-b1b4f77d8eec 00:41:22.162 [2024-12-09 23:23:00.417868] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:41:22.162 [2024-12-09 23:23:00.417888] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:22.162 [2024-12-09 23:23:00.417896] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:22.162 [2024-12-09 23:23:00.417902] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:22.162 [2024-12-09 23:23:00.417908] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:22.162 [2024-12-09 23:23:00.417915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:22.162 [2024-12-09 23:23:00.417920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:22.162 [2024-12-09 23:23:00.417927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:22.162 [2024-12-09 23:23:00.417932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:22.162 [2024-12-09 23:23:00.417938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:22.162 [2024-12-09 23:23:00.417944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:22.162 [2024-12-09 23:23:00.417951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:41:22.162 [2024-12-09 23:23:00.417956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.427693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:22.162 [2024-12-09 23:23:00.427717] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:22.162 [2024-12-09 23:23:00.427726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.711 ms 00:41:22.162 [2024-12-09 23:23:00.427732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.428004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:22.162 [2024-12-09 23:23:00.428011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:22.162 [2024-12-09 23:23:00.428018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:41:22.162 [2024-12-09 23:23:00.428024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.461130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.162 [2024-12-09 23:23:00.461159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:22.162 [2024-12-09 23:23:00.461170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.162 [2024-12-09 23:23:00.461177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.461235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.162 [2024-12-09 23:23:00.461242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:22.162 [2024-12-09 23:23:00.461249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.162 [2024-12-09 23:23:00.461255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.461309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.162 [2024-12-09 23:23:00.461319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:22.162 [2024-12-09 23:23:00.461327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.162 [2024-12-09 23:23:00.461333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.461349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.162 [2024-12-09 23:23:00.461355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:22.162 [2024-12-09 23:23:00.461362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.162 [2024-12-09 23:23:00.461367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.520917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.162 [2024-12-09 23:23:00.520953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:22.162 [2024-12-09 23:23:00.520962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.162 [2024-12-09 23:23:00.520968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.568947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.162 [2024-12-09 23:23:00.568980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:22.162 [2024-12-09 23:23:00.568990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.162 [2024-12-09 23:23:00.568996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.569082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
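The mngt/ftl_mngt.c records above come in fixed groups: an "Action" (or, during teardown, "Rollback") header from line 427, the step name from 428, its duration from 430, and a status code from 431; the 0.000 ms Rollback entries are startup steps being unwound, not work actually performed. Per-step timings can be pulled out of a saved copy of this console output with a one-liner along these lines (a hypothetical helper, not part of the SPDK tree; assumes the usual one-record-per-line console format, with build.log as a placeholder file name):

  # Pair each trace_step "name:" record (428) with the "duration:" record (430)
  # that follows it; prints e.g. "Deinitialize L2P: 9.711 ms".
  awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); print name ": " $0 }' build.log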
00:41:22.162 [2024-12-09 23:23:00.569090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:22.162 [2024-12-09 23:23:00.569100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.162 [2024-12-09 23:23:00.569106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.569145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.162 [2024-12-09 23:23:00.569153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:22.162 [2024-12-09 23:23:00.569160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.162 [2024-12-09 23:23:00.569166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.162 [2024-12-09 23:23:00.569257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.162 [2024-12-09 23:23:00.569265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:22.163 [2024-12-09 23:23:00.569273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.163 [2024-12-09 23:23:00.569280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.163 [2024-12-09 23:23:00.569307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.163 [2024-12-09 23:23:00.569314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:22.163 [2024-12-09 23:23:00.569321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.163 [2024-12-09 23:23:00.569327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.163 [2024-12-09 23:23:00.569358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.163 [2024-12-09 23:23:00.569364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:22.163 [2024-12-09 23:23:00.569371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.163 [2024-12-09 23:23:00.569386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.163 [2024-12-09 23:23:00.569423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:22.163 [2024-12-09 23:23:00.569430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:22.163 [2024-12-09 23:23:00.569437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:22.163 [2024-12-09 23:23:00.569444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:22.163 [2024-12-09 23:23:00.569547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 277.369 ms, result 0 00:41:22.163 true 00:41:22.163 23:23:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80109 00:41:22.163 23:23:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80109 00:41:22.163 23:23:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:41:22.422 [2024-12-09 23:23:00.660603] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:41:22.422 [2024-12-09 23:23:00.660892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80695 ] 00:41:22.422 [2024-12-09 23:23:00.820423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:22.682 [2024-12-09 23:23:00.905404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:23.620  [2024-12-09T23:23:03.462Z] Copying: 261/1024 [MB] (261 MBps) [2024-12-09T23:23:04.393Z] Copying: 523/1024 [MB] (262 MBps) [2024-12-09T23:23:05.335Z] Copying: 784/1024 [MB] (261 MBps) [2024-12-09T23:23:05.592Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:41:27.130 00:41:27.388 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80109 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:41:27.388 23:23:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:27.388 [2024-12-09 23:23:05.659239] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:41:27.388 [2024-12-09 23:23:05.659377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80749 ] 00:41:27.388 [2024-12-09 23:23:05.815617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.646 [2024-12-09 23:23:05.896032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:27.905 [2024-12-09 23:23:06.107722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:27.905 [2024-12-09 23:23:06.107776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:27.905 [2024-12-09 23:23:06.170377] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:27.905 [2024-12-09 23:23:06.170899] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:27.905 [2024-12-09 23:23:06.171036] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:27.905 [2024-12-09 23:23:06.353765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:27.905 [2024-12-09 23:23:06.353910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:27.905 [2024-12-09 23:23:06.353926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:27.905 [2024-12-09 23:23:06.353937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:27.905 [2024-12-09 23:23:06.353978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:27.905 [2024-12-09 23:23:06.353986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:27.905 [2024-12-09 23:23:06.353992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:41:27.905 [2024-12-09 23:23:06.353997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:27.905 [2024-12-09 23:23:06.354012] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:27.905 [2024-12-09 23:23:06.354551] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:27.905 [2024-12-09 23:23:06.354565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:27.905 [2024-12-09 23:23:06.354571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:27.905 [2024-12-09 23:23:06.354578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:41:27.905 [2024-12-09 23:23:06.354583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:27.905 [2024-12-09 23:23:06.355644] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:41:28.164 [2024-12-09 23:23:06.365465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.164 [2024-12-09 23:23:06.365492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:41:28.164 [2024-12-09 23:23:06.365501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.822 ms 00:41:28.164 [2024-12-09 23:23:06.365507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.164 [2024-12-09 23:23:06.365552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.164 [2024-12-09 23:23:06.365560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:41:28.164 [2024-12-09 23:23:06.365567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:41:28.164 [2024-12-09 23:23:06.365572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.164 [2024-12-09 23:23:06.370191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.164 [2024-12-09 23:23:06.370229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:28.164 [2024-12-09 23:23:06.370237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.581 ms 00:41:28.164 [2024-12-09 23:23:06.370243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.164 [2024-12-09 23:23:06.370298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.164 [2024-12-09 23:23:06.370304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:28.164 [2024-12-09 23:23:06.370310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:41:28.164 [2024-12-09 23:23:06.370315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.164 [2024-12-09 23:23:06.370356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.164 [2024-12-09 23:23:06.370364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:28.164 [2024-12-09 23:23:06.370370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:28.164 [2024-12-09 23:23:06.370376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.164 [2024-12-09 23:23:06.370390] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:28.164 [2024-12-09 23:23:06.373066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.164 [2024-12-09 23:23:06.373172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:28.164 [2024-12-09 23:23:06.373184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.680 ms 00:41:28.164 [2024-12-09 23:23:06.373190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.164 [2024-12-09 23:23:06.373229] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.164 [2024-12-09 23:23:06.373237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:28.164 [2024-12-09 23:23:06.373243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:41:28.164 [2024-12-09 23:23:06.373249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.164 [2024-12-09 23:23:06.373268] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:41:28.164 [2024-12-09 23:23:06.373283] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:41:28.164 [2024-12-09 23:23:06.373310] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:41:28.164 [2024-12-09 23:23:06.373322] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:41:28.164 [2024-12-09 23:23:06.373414] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:28.164 [2024-12-09 23:23:06.373422] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:28.164 [2024-12-09 23:23:06.373431] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:28.164 [2024-12-09 23:23:06.373440] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:28.164 [2024-12-09 23:23:06.373448] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:28.165 [2024-12-09 23:23:06.373454] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:41:28.165 [2024-12-09 23:23:06.373460] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:28.165 [2024-12-09 23:23:06.373466] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:28.165 [2024-12-09 23:23:06.373471] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:28.165 [2024-12-09 23:23:06.373477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.165 [2024-12-09 23:23:06.373483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:28.165 [2024-12-09 23:23:06.373489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:41:28.165 [2024-12-09 23:23:06.373494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.165 [2024-12-09 23:23:06.373559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.165 [2024-12-09 23:23:06.373567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:28.165 [2024-12-09 23:23:06.373573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:41:28.165 [2024-12-09 23:23:06.373578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.165 [2024-12-09 23:23:06.373652] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:28.165 [2024-12-09 23:23:06.373660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:28.165 [2024-12-09 23:23:06.373666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:28.165 [2024-12-09 23:23:06.373672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:28.165 [2024-12-09 23:23:06.373683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:41:28.165 [2024-12-09 23:23:06.373694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:28.165 [2024-12-09 23:23:06.373701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:28.165 [2024-12-09 23:23:06.373716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:28.165 [2024-12-09 23:23:06.373721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:41:28.165 [2024-12-09 23:23:06.373727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:28.165 [2024-12-09 23:23:06.373732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:28.165 [2024-12-09 23:23:06.373744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:41:28.165 [2024-12-09 23:23:06.373750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:28.165 [2024-12-09 23:23:06.373760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:41:28.165 [2024-12-09 23:23:06.373765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:28.165 [2024-12-09 23:23:06.373776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:28.165 [2024-12-09 23:23:06.373787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:28.165 [2024-12-09 23:23:06.373792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:28.165 [2024-12-09 23:23:06.373803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:28.165 [2024-12-09 23:23:06.373808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:28.165 [2024-12-09 23:23:06.373818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:28.165 [2024-12-09 23:23:06.373823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:28.165 [2024-12-09 23:23:06.373833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:28.165 [2024-12-09 23:23:06.373838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:28.165 [2024-12-09 23:23:06.373848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:28.165 [2024-12-09 23:23:06.373853] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:41:28.165 [2024-12-09 23:23:06.373858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:28.165 [2024-12-09 23:23:06.373864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:28.165 [2024-12-09 23:23:06.373869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:41:28.165 [2024-12-09 23:23:06.373874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:28.165 [2024-12-09 23:23:06.373884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:41:28.165 [2024-12-09 23:23:06.373889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373894] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:28.165 [2024-12-09 23:23:06.373901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:28.165 [2024-12-09 23:23:06.373908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:28.165 [2024-12-09 23:23:06.373915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:28.165 [2024-12-09 23:23:06.373921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:28.165 [2024-12-09 23:23:06.373926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:28.165 [2024-12-09 23:23:06.373931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:28.165 [2024-12-09 23:23:06.373936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:28.165 [2024-12-09 23:23:06.373941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:28.165 [2024-12-09 23:23:06.373946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:28.165 [2024-12-09 23:23:06.373953] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:28.165 [2024-12-09 23:23:06.373960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:28.165 [2024-12-09 23:23:06.373967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:41:28.165 [2024-12-09 23:23:06.373972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:41:28.165 [2024-12-09 23:23:06.373978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:41:28.165 [2024-12-09 23:23:06.373983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:41:28.165 [2024-12-09 23:23:06.373989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:41:28.165 [2024-12-09 23:23:06.373994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:41:28.165 [2024-12-09 23:23:06.374000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:41:28.165 [2024-12-09 23:23:06.374005] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:41:28.165 [2024-12-09 23:23:06.374011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:41:28.165 [2024-12-09 23:23:06.374016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:41:28.165 [2024-12-09 23:23:06.374021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:41:28.165 [2024-12-09 23:23:06.374027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:41:28.165 [2024-12-09 23:23:06.374033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:41:28.165 [2024-12-09 23:23:06.374039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:41:28.165 [2024-12-09 23:23:06.374044] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:28.165 [2024-12-09 23:23:06.374050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:28.165 [2024-12-09 23:23:06.374057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:28.165 [2024-12-09 23:23:06.374063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:28.165 [2024-12-09 23:23:06.374068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:28.165 [2024-12-09 23:23:06.374074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:28.165 [2024-12-09 23:23:06.374080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.165 [2024-12-09 23:23:06.374086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:28.165 [2024-12-09 23:23:06.374092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:41:28.165 [2024-12-09 23:23:06.374098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.165 [2024-12-09 23:23:06.394727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.165 [2024-12-09 23:23:06.394757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:28.165 [2024-12-09 23:23:06.394765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.595 ms 00:41:28.165 [2024-12-09 23:23:06.394772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.165 [2024-12-09 23:23:06.394838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.165 [2024-12-09 23:23:06.394845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:28.165 [2024-12-09 23:23:06.394851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:41:28.165 [2024-12-09 23:23:06.394856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:41:28.165 [2024-12-09 23:23:06.437400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.437434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:28.166 [2024-12-09 23:23:06.437446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.504 ms 00:41:28.166 [2024-12-09 23:23:06.437452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.437485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.437493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:28.166 [2024-12-09 23:23:06.437499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:28.166 [2024-12-09 23:23:06.437505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.437824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.437836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:28.166 [2024-12-09 23:23:06.437843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:41:28.166 [2024-12-09 23:23:06.437854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.437949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.437956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:28.166 [2024-12-09 23:23:06.437962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:41:28.166 [2024-12-09 23:23:06.437968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.448403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.448517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:28.166 [2024-12-09 23:23:06.448531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.419 ms 00:41:28.166 [2024-12-09 23:23:06.448537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.458375] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:41:28.166 [2024-12-09 23:23:06.458404] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:41:28.166 [2024-12-09 23:23:06.458414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.458421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:41:28.166 [2024-12-09 23:23:06.458428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.789 ms 00:41:28.166 [2024-12-09 23:23:06.458434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.476737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.476764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:41:28.166 [2024-12-09 23:23:06.476773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.272 ms 00:41:28.166 [2024-12-09 23:23:06.476779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.485596] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.485623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:41:28.166 [2024-12-09 23:23:06.485631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.787 ms 00:41:28.166 [2024-12-09 23:23:06.485637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.494331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.494357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:41:28.166 [2024-12-09 23:23:06.494364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.669 ms 00:41:28.166 [2024-12-09 23:23:06.494370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.494819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.494839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:28.166 [2024-12-09 23:23:06.494846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:41:28.166 [2024-12-09 23:23:06.494852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.539189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.539251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:41:28.166 [2024-12-09 23:23:06.539272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.324 ms 00:41:28.166 [2024-12-09 23:23:06.539279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.547445] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:41:28.166 [2024-12-09 23:23:06.549413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.549438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:28.166 [2024-12-09 23:23:06.549447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.095 ms 00:41:28.166 [2024-12-09 23:23:06.549457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.549514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.549523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:41:28.166 [2024-12-09 23:23:06.549531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:41:28.166 [2024-12-09 23:23:06.549537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.549602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.549611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:28.166 [2024-12-09 23:23:06.549619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:41:28.166 [2024-12-09 23:23:06.549625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.549642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.549648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:28.166 [2024-12-09 23:23:06.549655] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:28.166 [2024-12-09 23:23:06.549661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.549686] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:41:28.166 [2024-12-09 23:23:06.549694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.549699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:41:28.166 [2024-12-09 23:23:06.549705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:41:28.166 [2024-12-09 23:23:06.549714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.567622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.567651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:28.166 [2024-12-09 23:23:06.567660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.895 ms 00:41:28.166 [2024-12-09 23:23:06.567666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.567718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.166 [2024-12-09 23:23:06.567726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:28.166 [2024-12-09 23:23:06.567732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:41:28.166 [2024-12-09 23:23:06.567738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.166 [2024-12-09 23:23:06.568481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 214.370 ms, result 0 00:41:29.539  [2024-12-09T23:23:08.935Z] Copying: 37/1024 [MB] (37 MBps) [2024-12-09T23:23:09.868Z] Copying: 55/1024 [MB] (17 MBps) [2024-12-09T23:23:10.827Z] Copying: 82/1024 [MB] (26 MBps) [2024-12-09T23:23:11.761Z] Copying: 106/1024 [MB] (23 MBps) [2024-12-09T23:23:12.695Z] Copying: 128/1024 [MB] (22 MBps) [2024-12-09T23:23:13.628Z] Copying: 148/1024 [MB] (20 MBps) [2024-12-09T23:23:15.001Z] Copying: 165/1024 [MB] (17 MBps) [2024-12-09T23:23:15.940Z] Copying: 182/1024 [MB] (16 MBps) [2024-12-09T23:23:16.877Z] Copying: 205/1024 [MB] (23 MBps) [2024-12-09T23:23:17.814Z] Copying: 227/1024 [MB] (21 MBps) [2024-12-09T23:23:18.749Z] Copying: 249/1024 [MB] (22 MBps) [2024-12-09T23:23:19.683Z] Copying: 271/1024 [MB] (21 MBps) [2024-12-09T23:23:20.618Z] Copying: 286/1024 [MB] (15 MBps) [2024-12-09T23:23:22.002Z] Copying: 306/1024 [MB] (19 MBps) [2024-12-09T23:23:22.941Z] Copying: 325/1024 [MB] (18 MBps) [2024-12-09T23:23:23.882Z] Copying: 340/1024 [MB] (15 MBps) [2024-12-09T23:23:24.824Z] Copying: 360/1024 [MB] (19 MBps) [2024-12-09T23:23:25.767Z] Copying: 373/1024 [MB] (13 MBps) [2024-12-09T23:23:26.712Z] Copying: 391608/1048576 [kB] (9292 kBps) [2024-12-09T23:23:27.657Z] Copying: 394/1024 [MB] (11 MBps) [2024-12-09T23:23:28.601Z] Copying: 412816/1048576 [kB] (9084 kBps) [2024-12-09T23:23:29.989Z] Copying: 422220/1048576 [kB] (9404 kBps) [2024-12-09T23:23:30.933Z] Copying: 431932/1048576 [kB] (9712 kBps) [2024-12-09T23:23:31.870Z] Copying: 441932/1048576 [kB] (10000 kBps) [2024-12-09T23:23:32.828Z] Copying: 441/1024 [MB] (10 MBps) [2024-12-09T23:23:33.841Z] Copying: 462188/1048576 [kB] (9980 kBps) [2024-12-09T23:23:34.785Z] Copying: 471328/1048576 [kB] (9140 kBps) [2024-12-09T23:23:35.727Z] Copying: 
480344/1048576 [kB] (9016 kBps) [2024-12-09T23:23:36.670Z] Copying: 481/1024 [MB] (12 MBps) [2024-12-09T23:23:37.615Z] Copying: 524/1024 [MB] (42 MBps) [2024-12-09T23:23:39.003Z] Copying: 567/1024 [MB] (43 MBps) [2024-12-09T23:23:39.945Z] Copying: 610/1024 [MB] (42 MBps) [2024-12-09T23:23:40.887Z] Copying: 654/1024 [MB] (43 MBps) [2024-12-09T23:23:41.830Z] Copying: 698/1024 [MB] (43 MBps) [2024-12-09T23:23:42.774Z] Copying: 742/1024 [MB] (43 MBps) [2024-12-09T23:23:43.718Z] Copying: 767/1024 [MB] (25 MBps) [2024-12-09T23:23:44.659Z] Copying: 811/1024 [MB] (44 MBps) [2024-12-09T23:23:45.600Z] Copying: 855/1024 [MB] (44 MBps) [2024-12-09T23:23:46.985Z] Copying: 899/1024 [MB] (44 MBps) [2024-12-09T23:23:47.925Z] Copying: 944/1024 [MB] (44 MBps) [2024-12-09T23:23:48.864Z] Copying: 987/1024 [MB] (43 MBps) [2024-12-09T23:23:49.432Z] Copying: 1023/1024 [MB] (35 MBps) [2024-12-09T23:23:49.432Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-09 23:23:49.422956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.970 [2024-12-09 23:23:49.423010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:10.970 [2024-12-09 23:23:49.423023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:42:10.970 [2024-12-09 23:23:49.423030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.970 [2024-12-09 23:23:49.424228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:10.970 [2024-12-09 23:23:49.429880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.430013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:11.229 [2024-12-09 23:23:49.430027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.620 ms 00:42:11.229 [2024-12-09 23:23:49.430039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.437256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.437285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:11.229 [2024-12-09 23:23:49.437293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.339 ms 00:42:11.229 [2024-12-09 23:23:49.437300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.452897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.453006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:11.229 [2024-12-09 23:23:49.453020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.585 ms 00:42:11.229 [2024-12-09 23:23:49.453026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.457697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.457722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:11.229 [2024-12-09 23:23:49.457730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:42:11.229 [2024-12-09 23:23:49.457736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.476445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.476475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
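The replay has finished (1024/1024 [MB], average 23 MBps; note how much slower writing through FTL is than the ~260 MBps plain file copy earlier), and this time the run ends with an orderly "FTL shutdown": the Persist steps here and below (L2P, NV cache metadata, valid map, P2L, band info, trim, superblock) are the bookkeeping a SIGKILL-ed target never gets to do, ending with "Set FTL clean state". Throughput claims like the ones above can be sanity-checked against a saved log (a hypothetical helper; build.log is a placeholder):

  # Mean of the per-interval MBps samples printed by spdk_dd
  # (kBps samples from the slow stretches are deliberately not matched).
  grep -oE '\([0-9]+ MBps\)' build.log | tr -d '()' |
      awk '{ sum += $1; n++ } END { if (n) printf "%.1f MBps over %d samples\n", sum/n, n }'

The statistics dump further below is also worth checking by hand: total writes 128192 against user writes 127232 gives WAF = 128192 / 127232 ≈ 1.0075, matching the "WAF:" line, and the 960-write difference is consistent with the metadata-only "total writes: 960" of the first dump, where user writes were still 0 and WAF was therefore printed as inf.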
00:42:11.229 [2024-12-09 23:23:49.476484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.667 ms 00:42:11.229 [2024-12-09 23:23:49.476490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.488267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.488301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:11.229 [2024-12-09 23:23:49.488310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.749 ms 00:42:11.229 [2024-12-09 23:23:49.488317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.542412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.542441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:11.229 [2024-12-09 23:23:49.542454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.067 ms 00:42:11.229 [2024-12-09 23:23:49.542460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.560312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.560336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:11.229 [2024-12-09 23:23:49.560344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.840 ms 00:42:11.229 [2024-12-09 23:23:49.560358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.577770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.577796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:11.229 [2024-12-09 23:23:49.577803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.386 ms 00:42:11.229 [2024-12-09 23:23:49.577809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.594803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.594834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:11.229 [2024-12-09 23:23:49.594842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.968 ms 00:42:11.229 [2024-12-09 23:23:49.594847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.611641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.229 [2024-12-09 23:23:49.611750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:11.229 [2024-12-09 23:23:49.611764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.750 ms 00:42:11.229 [2024-12-09 23:23:49.611769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.229 [2024-12-09 23:23:49.611792] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:11.229 [2024-12-09 23:23:49.611803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127232 / 261120 wr_cnt: 1 state: open 00:42:11.229 [2024-12-09 23:23:49.611812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611825] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:11.229 [2024-12-09 23:23:49.611947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.611953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.611958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 
23:23:49.612134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:42:11.230 [2024-12-09 23:23:49.612297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:11.230 [2024-12-09 23:23:49.612585] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:11.230 [2024-12-09 23:23:49.612591] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 290e4479-0ec0-4dd4-8c2a-b1b4f77d8eec 00:42:11.230 [2024-12-09 23:23:49.612605] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127232 
00:42:11.230 [2024-12-09 23:23:49.612611] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128192 00:42:11.230 [2024-12-09 23:23:49.612616] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127232 00:42:11.230 [2024-12-09 23:23:49.612622] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:42:11.230 [2024-12-09 23:23:49.612628] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:11.230 [2024-12-09 23:23:49.612634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:11.230 [2024-12-09 23:23:49.612639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:11.230 [2024-12-09 23:23:49.612644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:11.230 [2024-12-09 23:23:49.612649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:11.230 [2024-12-09 23:23:49.612654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.230 [2024-12-09 23:23:49.612661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:11.230 [2024-12-09 23:23:49.612667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:42:11.230 [2024-12-09 23:23:49.612673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.230 [2024-12-09 23:23:49.622480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.230 [2024-12-09 23:23:49.622579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:11.230 [2024-12-09 23:23:49.622591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.795 ms 00:42:11.230 [2024-12-09 23:23:49.622597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.230 [2024-12-09 23:23:49.622883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.230 [2024-12-09 23:23:49.622892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:11.231 [2024-12-09 23:23:49.622903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:42:11.231 [2024-12-09 23:23:49.622910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.231 [2024-12-09 23:23:49.649878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.231 [2024-12-09 23:23:49.649978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:11.231 [2024-12-09 23:23:49.649990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.231 [2024-12-09 23:23:49.649996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.231 [2024-12-09 23:23:49.650037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.231 [2024-12-09 23:23:49.650045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:11.231 [2024-12-09 23:23:49.650055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.231 [2024-12-09 23:23:49.650061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.231 [2024-12-09 23:23:49.650105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.231 [2024-12-09 23:23:49.650113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:11.231 [2024-12-09 23:23:49.650120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.231 [2024-12-09 23:23:49.650125] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.231 [2024-12-09 23:23:49.650138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.231 [2024-12-09 23:23:49.650145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:11.231 [2024-12-09 23:23:49.650151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.231 [2024-12-09 23:23:49.650157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.491 [2024-12-09 23:23:49.712165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.491 [2024-12-09 23:23:49.712201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:11.491 [2024-12-09 23:23:49.712212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.491 [2024-12-09 23:23:49.712229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.491 [2024-12-09 23:23:49.762906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.491 [2024-12-09 23:23:49.762945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:11.491 [2024-12-09 23:23:49.762956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.491 [2024-12-09 23:23:49.762967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.491 [2024-12-09 23:23:49.763035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.491 [2024-12-09 23:23:49.763043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:11.491 [2024-12-09 23:23:49.763050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.491 [2024-12-09 23:23:49.763056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.491 [2024-12-09 23:23:49.763085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.491 [2024-12-09 23:23:49.763093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:11.491 [2024-12-09 23:23:49.763100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.491 [2024-12-09 23:23:49.763106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.491 [2024-12-09 23:23:49.763185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.491 [2024-12-09 23:23:49.763194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:11.491 [2024-12-09 23:23:49.763201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.491 [2024-12-09 23:23:49.763207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.491 [2024-12-09 23:23:49.763246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.491 [2024-12-09 23:23:49.763254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:11.491 [2024-12-09 23:23:49.763278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.491 [2024-12-09 23:23:49.763284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.491 [2024-12-09 23:23:49.763322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.491 [2024-12-09 23:23:49.763331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:11.491 [2024-12-09 23:23:49.763338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:42:11.491 [2024-12-09 23:23:49.763344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.491 [2024-12-09 23:23:49.763383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.491 [2024-12-09 23:23:49.763392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:11.491 [2024-12-09 23:23:49.763398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.491 [2024-12-09 23:23:49.763405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.491 [2024-12-09 23:23:49.763511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.711 ms, result 0 00:42:13.400 00:42:13.400 00:42:13.400 23:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:42:15.313 23:23:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:15.313 [2024-12-09 23:23:53.664912] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:42:15.313 [2024-12-09 23:23:53.665271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81236 ] 00:42:15.571 [2024-12-09 23:23:53.822853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.571 [2024-12-09 23:23:53.911210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.831 [2024-12-09 23:23:54.145245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:15.831 [2024-12-09 23:23:54.145304] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:16.166 [2024-12-09 23:23:54.301724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.166 [2024-12-09 23:23:54.301768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:16.166 [2024-12-09 23:23:54.301780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:16.166 [2024-12-09 23:23:54.301787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.166 [2024-12-09 23:23:54.301828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.166 [2024-12-09 23:23:54.301839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:16.166 [2024-12-09 23:23:54.301846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:42:16.166 [2024-12-09 23:23:54.301852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.166 [2024-12-09 23:23:54.301866] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:16.166 [2024-12-09 23:23:54.302434] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:16.166 [2024-12-09 23:23:54.302450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.166 [2024-12-09 23:23:54.302458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:16.166 [2024-12-09 23:23:54.302465] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:42:16.166 [2024-12-09 23:23:54.302471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.166 [2024-12-09 23:23:54.303720] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:42:16.166 [2024-12-09 23:23:54.314244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.166 [2024-12-09 23:23:54.314272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:42:16.166 [2024-12-09 23:23:54.314281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.525 ms 00:42:16.166 [2024-12-09 23:23:54.314288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.166 [2024-12-09 23:23:54.314337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.166 [2024-12-09 23:23:54.314345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:42:16.166 [2024-12-09 23:23:54.314351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:42:16.166 [2024-12-09 23:23:54.314357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.166 [2024-12-09 23:23:54.320625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.166 [2024-12-09 23:23:54.320651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:16.166 [2024-12-09 23:23:54.320659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.227 ms 00:42:16.166 [2024-12-09 23:23:54.320668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.166 [2024-12-09 23:23:54.320722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.166 [2024-12-09 23:23:54.320730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:16.166 [2024-12-09 23:23:54.320736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:42:16.166 [2024-12-09 23:23:54.320742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.166 [2024-12-09 23:23:54.320781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.166 [2024-12-09 23:23:54.320789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:16.166 [2024-12-09 23:23:54.320797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:42:16.166 [2024-12-09 23:23:54.320803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.166 [2024-12-09 23:23:54.320821] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:16.166 [2024-12-09 23:23:54.323773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.166 [2024-12-09 23:23:54.323797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:16.166 [2024-12-09 23:23:54.323806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:42:16.166 [2024-12-09 23:23:54.323813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.166 [2024-12-09 23:23:54.323843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.166 [2024-12-09 23:23:54.323850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:16.167 [2024-12-09 23:23:54.323857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:16.167 [2024-12-09 23:23:54.323862] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.167 [2024-12-09 23:23:54.323877] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:42:16.167 [2024-12-09 23:23:54.323894] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:42:16.167 [2024-12-09 23:23:54.323922] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:42:16.167 [2024-12-09 23:23:54.323937] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:42:16.167 [2024-12-09 23:23:54.324020] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:16.167 [2024-12-09 23:23:54.324029] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:16.167 [2024-12-09 23:23:54.324039] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:16.167 [2024-12-09 23:23:54.324046] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:16.167 [2024-12-09 23:23:54.324053] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:16.167 [2024-12-09 23:23:54.324060] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:16.167 [2024-12-09 23:23:54.324066] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:16.167 [2024-12-09 23:23:54.324074] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:16.167 [2024-12-09 23:23:54.324080] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:16.167 [2024-12-09 23:23:54.324086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.167 [2024-12-09 23:23:54.324093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:16.167 [2024-12-09 23:23:54.324099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:42:16.167 [2024-12-09 23:23:54.324104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.167 [2024-12-09 23:23:54.324167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.167 [2024-12-09 23:23:54.324174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:16.167 [2024-12-09 23:23:54.324180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:42:16.167 [2024-12-09 23:23:54.324185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.167 [2024-12-09 23:23:54.324272] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:16.167 [2024-12-09 23:23:54.324282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:16.167 [2024-12-09 23:23:54.324289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:16.167 [2024-12-09 23:23:54.324295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:16.167 [2024-12-09 23:23:54.324307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:42:16.167 [2024-12-09 23:23:54.324319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:16.167 [2024-12-09 23:23:54.324325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:16.167 [2024-12-09 23:23:54.324336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:16.167 [2024-12-09 23:23:54.324342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:16.167 [2024-12-09 23:23:54.324349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:16.167 [2024-12-09 23:23:54.324362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:16.167 [2024-12-09 23:23:54.324367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:16.167 [2024-12-09 23:23:54.324372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:16.167 [2024-12-09 23:23:54.324383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:16.167 [2024-12-09 23:23:54.324387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:16.167 [2024-12-09 23:23:54.324398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:16.167 [2024-12-09 23:23:54.324431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:16.167 [2024-12-09 23:23:54.324437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:16.167 [2024-12-09 23:23:54.324446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:16.167 [2024-12-09 23:23:54.324451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:16.167 [2024-12-09 23:23:54.324461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:16.167 [2024-12-09 23:23:54.324466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:16.167 [2024-12-09 23:23:54.324476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:16.167 [2024-12-09 23:23:54.324481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:16.167 [2024-12-09 23:23:54.324491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:16.167 [2024-12-09 23:23:54.324496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:16.167 [2024-12-09 23:23:54.324501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:16.167 [2024-12-09 23:23:54.324506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:16.167 [2024-12-09 23:23:54.324511] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:16.167 [2024-12-09 23:23:54.324516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:16.167 [2024-12-09 23:23:54.324526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:16.167 [2024-12-09 23:23:54.324532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324537] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:16.167 [2024-12-09 23:23:54.324545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:16.167 [2024-12-09 23:23:54.324552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:16.167 [2024-12-09 23:23:54.324558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:16.167 [2024-12-09 23:23:54.324564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:16.167 [2024-12-09 23:23:54.324569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:16.167 [2024-12-09 23:23:54.324574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:16.167 [2024-12-09 23:23:54.324579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:16.167 [2024-12-09 23:23:54.324585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:16.167 [2024-12-09 23:23:54.324590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:16.167 [2024-12-09 23:23:54.324597] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:16.167 [2024-12-09 23:23:54.324603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:16.167 [2024-12-09 23:23:54.324612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:16.167 [2024-12-09 23:23:54.324619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:16.167 [2024-12-09 23:23:54.324625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:16.167 [2024-12-09 23:23:54.324630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:16.167 [2024-12-09 23:23:54.324636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:16.167 [2024-12-09 23:23:54.324641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:16.167 [2024-12-09 23:23:54.324647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:16.167 [2024-12-09 23:23:54.324660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:16.167 [2024-12-09 23:23:54.324666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:16.167 [2024-12-09 23:23:54.324671] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:16.167 [2024-12-09 23:23:54.324677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:16.167 [2024-12-09 23:23:54.324682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:16.167 [2024-12-09 23:23:54.324688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:16.167 [2024-12-09 23:23:54.324693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:16.167 [2024-12-09 23:23:54.324699] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:16.167 [2024-12-09 23:23:54.324705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:16.167 [2024-12-09 23:23:54.324711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:16.167 [2024-12-09 23:23:54.324717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:16.167 [2024-12-09 23:23:54.324722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:16.167 [2024-12-09 23:23:54.324727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:16.167 [2024-12-09 23:23:54.324732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.167 [2024-12-09 23:23:54.324739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:16.167 [2024-12-09 23:23:54.324745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:42:16.168 [2024-12-09 23:23:54.324751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.348895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.348927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:16.168 [2024-12-09 23:23:54.348936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.101 ms 00:42:16.168 [2024-12-09 23:23:54.348946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.349016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.349022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:16.168 [2024-12-09 23:23:54.349029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:42:16.168 [2024-12-09 23:23:54.349035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.390316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.390349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:16.168 [2024-12-09 23:23:54.390359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 41.238 ms 00:42:16.168 [2024-12-09 23:23:54.390366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.390400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.390409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:16.168 [2024-12-09 23:23:54.390418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:42:16.168 [2024-12-09 23:23:54.390424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.390829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.390843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:16.168 [2024-12-09 23:23:54.390850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:42:16.168 [2024-12-09 23:23:54.390856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.390971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.390979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:16.168 [2024-12-09 23:23:54.390990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:42:16.168 [2024-12-09 23:23:54.390999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.402853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.402880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:16.168 [2024-12-09 23:23:54.402890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.837 ms 00:42:16.168 [2024-12-09 23:23:54.402896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.413739] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:42:16.168 [2024-12-09 23:23:54.413890] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:42:16.168 [2024-12-09 23:23:54.413904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.413911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:42:16.168 [2024-12-09 23:23:54.413919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.914 ms 00:42:16.168 [2024-12-09 23:23:54.413924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.432839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.432941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:42:16.168 [2024-12-09 23:23:54.432954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.886 ms 00:42:16.168 [2024-12-09 23:23:54.432960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.442225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.442250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:42:16.168 [2024-12-09 23:23:54.442258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.235 ms 00:42:16.168 [2024-12-09 23:23:54.442265] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.451600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.451694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:42:16.168 [2024-12-09 23:23:54.451706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.308 ms 00:42:16.168 [2024-12-09 23:23:54.451713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.452413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.452440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:16.168 [2024-12-09 23:23:54.452451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:42:16.168 [2024-12-09 23:23:54.452458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.501234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.501275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:42:16.168 [2024-12-09 23:23:54.501291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.761 ms 00:42:16.168 [2024-12-09 23:23:54.501299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.509567] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:42:16.168 [2024-12-09 23:23:54.511980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.512005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:16.168 [2024-12-09 23:23:54.512015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.644 ms 00:42:16.168 [2024-12-09 23:23:54.512022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.512082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.512091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:42:16.168 [2024-12-09 23:23:54.512101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:16.168 [2024-12-09 23:23:54.512108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.513751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.513846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:16.168 [2024-12-09 23:23:54.513888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:42:16.168 [2024-12-09 23:23:54.513907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.513939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.513956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:16.168 [2024-12-09 23:23:54.513972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:16.168 [2024-12-09 23:23:54.513979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.514015] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:42:16.168 [2024-12-09 23:23:54.514025] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.514032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:42:16.168 [2024-12-09 23:23:54.514038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:16.168 [2024-12-09 23:23:54.514045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.532659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.532687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:16.168 [2024-12-09 23:23:54.532700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.600 ms 00:42:16.168 [2024-12-09 23:23:54.532707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.532765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.168 [2024-12-09 23:23:54.532773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:16.168 [2024-12-09 23:23:54.532780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:42:16.168 [2024-12-09 23:23:54.532786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.168 [2024-12-09 23:23:54.533687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 231.571 ms, result 0 00:42:17.542  [2024-12-09T23:23:56.945Z] Copying: 2264/1048576 [kB] (2264 kBps) [2024-12-09T23:23:57.885Z] Copying: 13/1024 [MB] (11 MBps) [2024-12-09T23:23:58.828Z] Copying: 36/1024 [MB] (23 MBps) [2024-12-09T23:23:59.767Z] Copying: 66/1024 [MB] (29 MBps) [2024-12-09T23:24:00.706Z] Copying: 90/1024 [MB] (24 MBps) [2024-12-09T23:24:02.080Z] Copying: 109/1024 [MB] (18 MBps) [2024-12-09T23:24:03.016Z] Copying: 134/1024 [MB] (25 MBps) [2024-12-09T23:24:03.955Z] Copying: 166/1024 [MB] (31 MBps) [2024-12-09T23:24:04.894Z] Copying: 219/1024 [MB] (52 MBps) [2024-12-09T23:24:05.836Z] Copying: 275/1024 [MB] (56 MBps) [2024-12-09T23:24:06.778Z] Copying: 326/1024 [MB] (51 MBps) [2024-12-09T23:24:07.714Z] Copying: 379/1024 [MB] (52 MBps) [2024-12-09T23:24:09.099Z] Copying: 438/1024 [MB] (59 MBps) [2024-12-09T23:24:10.042Z] Copying: 492/1024 [MB] (53 MBps) [2024-12-09T23:24:10.984Z] Copying: 546/1024 [MB] (53 MBps) [2024-12-09T23:24:11.966Z] Copying: 598/1024 [MB] (52 MBps) [2024-12-09T23:24:12.910Z] Copying: 651/1024 [MB] (52 MBps) [2024-12-09T23:24:13.848Z] Copying: 683/1024 [MB] (32 MBps) [2024-12-09T23:24:14.800Z] Copying: 710/1024 [MB] (26 MBps) [2024-12-09T23:24:15.740Z] Copying: 733/1024 [MB] (23 MBps) [2024-12-09T23:24:16.681Z] Copying: 756/1024 [MB] (22 MBps) [2024-12-09T23:24:18.079Z] Copying: 785/1024 [MB] (28 MBps) [2024-12-09T23:24:19.019Z] Copying: 808/1024 [MB] (23 MBps) [2024-12-09T23:24:19.958Z] Copying: 836/1024 [MB] (28 MBps) [2024-12-09T23:24:20.897Z] Copying: 867/1024 [MB] (30 MBps) [2024-12-09T23:24:21.839Z] Copying: 897/1024 [MB] (29 MBps) [2024-12-09T23:24:22.778Z] Copying: 926/1024 [MB] (29 MBps) [2024-12-09T23:24:23.710Z] Copying: 954/1024 [MB] (27 MBps) [2024-12-09T23:24:25.091Z] Copying: 986/1024 [MB] (32 MBps) [2024-12-09T23:24:25.349Z] Copying: 1009/1024 [MB] (23 MBps) [2024-12-09T23:24:25.608Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-12-09 23:24:25.518514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.146 [2024-12-09 23:24:25.518883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:42:47.146 [2024-12-09 23:24:25.519092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:42:47.146 [2024-12-09 23:24:25.519125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.146 [2024-12-09 23:24:25.519184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:47.146 [2024-12-09 23:24:25.522911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.146 [2024-12-09 23:24:25.523254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:47.146 [2024-12-09 23:24:25.523279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.458 ms 00:42:47.146 [2024-12-09 23:24:25.523288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.146 [2024-12-09 23:24:25.523549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.146 [2024-12-09 23:24:25.523568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:47.146 [2024-12-09 23:24:25.523579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:42:47.146 [2024-12-09 23:24:25.523588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.146 [2024-12-09 23:24:25.538389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.146 [2024-12-09 23:24:25.538428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:47.146 [2024-12-09 23:24:25.538441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.782 ms 00:42:47.146 [2024-12-09 23:24:25.538451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.146 [2024-12-09 23:24:25.544651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.146 [2024-12-09 23:24:25.544687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:47.146 [2024-12-09 23:24:25.544708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.166 ms 00:42:47.146 [2024-12-09 23:24:25.544719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.146 [2024-12-09 23:24:25.572610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.146 [2024-12-09 23:24:25.572651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:47.146 [2024-12-09 23:24:25.572664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.839 ms 00:42:47.146 [2024-12-09 23:24:25.572673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.146 [2024-12-09 23:24:25.589925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.146 [2024-12-09 23:24:25.589968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:47.146 [2024-12-09 23:24:25.589983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.204 ms 00:42:47.146 [2024-12-09 23:24:25.589993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.146 [2024-12-09 23:24:25.595585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.147 [2024-12-09 23:24:25.595778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:47.147 [2024-12-09 23:24:25.595801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.538 ms 00:42:47.147 [2024-12-09 23:24:25.595819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
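The Dump statistics entries above and below report the device's write amplification factor (WAF). The logged counters are consistent with WAF simply being the ratio of total writes to user writes: the first dump shows 128192 / 127232 ≈ 1.0075, and the later dump's 137408 / 135424 works out to 1.0147 the same way. A minimal sketch of that arithmetic, assuming this definition of WAF (the values below are copied from the log's ftl_dev_dump_stats entries, not taken from any SPDK API):

    # Minimal sketch: reproduce the WAF values reported by the
    # ftl_dev_dump_stats entries in this log, assuming
    # WAF = total writes / user writes.
    for total_writes, user_writes in ((128192, 127232), (137408, 135424)):
        waf = total_writes / user_writes
        print(f"{total_writes} / {user_writes} -> WAF: {waf:.4f}")
    # prints WAF: 1.0075 and WAF: 1.0147, matching the logged values

The gap between the two ratios reflects the extra metadata and relocation writes the FTL issues on top of user I/O; a WAF close to 1.0 indicates the dirty-shutdown recovery added little write overhead.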
00:42:47.409 [2024-12-09 23:24:25.621880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.409 [2024-12-09 23:24:25.621919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:47.409 [2024-12-09 23:24:25.621932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.034 ms 00:42:47.409 [2024-12-09 23:24:25.621941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.409 [2024-12-09 23:24:25.647392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.409 [2024-12-09 23:24:25.647428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:47.409 [2024-12-09 23:24:25.647440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.403 ms 00:42:47.409 [2024-12-09 23:24:25.647448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.409 [2024-12-09 23:24:25.672427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.409 [2024-12-09 23:24:25.672466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:47.409 [2024-12-09 23:24:25.672478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.931 ms 00:42:47.409 [2024-12-09 23:24:25.672488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.409 [2024-12-09 23:24:25.697388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.409 [2024-12-09 23:24:25.697595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:47.409 [2024-12-09 23:24:25.697617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.683 ms 00:42:47.409 [2024-12-09 23:24:25.697626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.409 [2024-12-09 23:24:25.697663] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:47.409 [2024-12-09 23:24:25.697682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:42:47.409 [2024-12-09 23:24:25.697694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:42:47.409 [2024-12-09 23:24:25.697704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697780] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 
[2024-12-09 23:24:25.697979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.697994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:47.409 [2024-12-09 23:24:25.698172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 
state: free 00:42:47.409 [2024-12-09 23:24:25.698179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 
0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:47.410 [2024-12-09 23:24:25.698553] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:47.410 [2024-12-09 23:24:25.698561] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 290e4479-0ec0-4dd4-8c2a-b1b4f77d8eec 00:42:47.410 [2024-12-09 23:24:25.698570] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:42:47.410 [2024-12-09 23:24:25.698578] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 137408 00:42:47.410 [2024-12-09 23:24:25.698601] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 135424 00:42:47.410 [2024-12-09 23:24:25.698610] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:42:47.410 [2024-12-09 23:24:25.698618] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:47.410 [2024-12-09 23:24:25.698636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:47.410 [2024-12-09 23:24:25.698645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:47.410 [2024-12-09 23:24:25.698652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:47.410 [2024-12-09 23:24:25.698658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:47.410 [2024-12-09 23:24:25.698667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.410 [2024-12-09 23:24:25.698675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:47.410 
[2024-12-09 23:24:25.698693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:42:47.410 [2024-12-09 23:24:25.698701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.410 [2024-12-09 23:24:25.713355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.410 [2024-12-09 23:24:25.713420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:47.410 [2024-12-09 23:24:25.713435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.617 ms 00:42:47.410 [2024-12-09 23:24:25.713444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.410 [2024-12-09 23:24:25.713866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:47.410 [2024-12-09 23:24:25.713876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:47.410 [2024-12-09 23:24:25.713887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:42:47.410 [2024-12-09 23:24:25.713895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.410 [2024-12-09 23:24:25.753583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.410 [2024-12-09 23:24:25.753623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:47.410 [2024-12-09 23:24:25.753636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.410 [2024-12-09 23:24:25.753645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.410 [2024-12-09 23:24:25.753716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.410 [2024-12-09 23:24:25.753727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:47.410 [2024-12-09 23:24:25.753736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.410 [2024-12-09 23:24:25.753745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.410 [2024-12-09 23:24:25.753842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.410 [2024-12-09 23:24:25.753853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:47.410 [2024-12-09 23:24:25.753862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.410 [2024-12-09 23:24:25.753871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.410 [2024-12-09 23:24:25.753895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.410 [2024-12-09 23:24:25.753905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:47.410 [2024-12-09 23:24:25.753913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.410 [2024-12-09 23:24:25.753922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.410 [2024-12-09 23:24:25.845400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.410 [2024-12-09 23:24:25.845458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:47.410 [2024-12-09 23:24:25.845472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.410 [2024-12-09 23:24:25.845482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.672 [2024-12-09 23:24:25.919472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.672 [2024-12-09 23:24:25.919541] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:47.672 [2024-12-09 23:24:25.919562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.672 [2024-12-09 23:24:25.919572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.672 [2024-12-09 23:24:25.919655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.672 [2024-12-09 23:24:25.919673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:47.672 [2024-12-09 23:24:25.919683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.672 [2024-12-09 23:24:25.919692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.672 [2024-12-09 23:24:25.919764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.672 [2024-12-09 23:24:25.919776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:47.672 [2024-12-09 23:24:25.919785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.672 [2024-12-09 23:24:25.919794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.672 [2024-12-09 23:24:25.919907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.672 [2024-12-09 23:24:25.919919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:47.672 [2024-12-09 23:24:25.919932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.672 [2024-12-09 23:24:25.919941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.672 [2024-12-09 23:24:25.919983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.672 [2024-12-09 23:24:25.919995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:47.672 [2024-12-09 23:24:25.920003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.672 [2024-12-09 23:24:25.920012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.672 [2024-12-09 23:24:25.920068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.672 [2024-12-09 23:24:25.920079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:47.672 [2024-12-09 23:24:25.920093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.672 [2024-12-09 23:24:25.920102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.672 [2024-12-09 23:24:25.920165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:47.672 [2024-12-09 23:24:25.920177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:47.672 [2024-12-09 23:24:25.920185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:47.672 [2024-12-09 23:24:25.920194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:47.672 [2024-12-09 23:24:25.920435] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.842 ms, result 0 00:42:48.613 00:42:48.613 00:42:48.613 23:24:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:42:51.153 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:42:51.153 23:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:51.153 [2024-12-09 23:24:29.082631] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:42:51.153 [2024-12-09 23:24:29.082753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81601 ] 00:42:51.153 [2024-12-09 23:24:29.244488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:51.153 [2024-12-09 23:24:29.354269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:51.414 [2024-12-09 23:24:29.649590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:51.414 [2024-12-09 23:24:29.649948] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:51.414 [2024-12-09 23:24:29.815154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.815234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:51.414 [2024-12-09 23:24:29.815251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:42:51.414 [2024-12-09 23:24:29.815261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.815320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.815335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:51.414 [2024-12-09 23:24:29.815346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:42:51.414 [2024-12-09 23:24:29.815354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.815376] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:51.414 [2024-12-09 23:24:29.816073] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:51.414 [2024-12-09 23:24:29.816105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.816115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:51.414 [2024-12-09 23:24:29.816124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:42:51.414 [2024-12-09 23:24:29.816132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.818214] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:42:51.414 [2024-12-09 23:24:29.833164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.833213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:42:51.414 [2024-12-09 23:24:29.833245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.952 ms 00:42:51.414 [2024-12-09 23:24:29.833255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.833340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.833352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:42:51.414 [2024-12-09 23:24:29.833362] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:42:51.414 [2024-12-09 23:24:29.833384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.844059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.844300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:51.414 [2024-12-09 23:24:29.844322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.593 ms 00:42:51.414 [2024-12-09 23:24:29.844338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.844425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.844436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:51.414 [2024-12-09 23:24:29.844446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:42:51.414 [2024-12-09 23:24:29.844455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.844518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.844530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:51.414 [2024-12-09 23:24:29.844540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:42:51.414 [2024-12-09 23:24:29.844548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.844577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:51.414 [2024-12-09 23:24:29.849186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.849241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:51.414 [2024-12-09 23:24:29.849258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.617 ms 00:42:51.414 [2024-12-09 23:24:29.849267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.849307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.849319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:51.414 [2024-12-09 23:24:29.849330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:42:51.414 [2024-12-09 23:24:29.849338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.849405] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:42:51.414 [2024-12-09 23:24:29.849437] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:42:51.414 [2024-12-09 23:24:29.849478] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:42:51.414 [2024-12-09 23:24:29.849501] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:42:51.414 [2024-12-09 23:24:29.849613] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:51.414 [2024-12-09 23:24:29.849625] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:51.414 [2024-12-09 23:24:29.849637] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:51.414 [2024-12-09 23:24:29.849648] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:51.414 [2024-12-09 23:24:29.849658] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:51.414 [2024-12-09 23:24:29.849667] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:51.414 [2024-12-09 23:24:29.849675] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:51.414 [2024-12-09 23:24:29.849687] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:51.414 [2024-12-09 23:24:29.849697] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:51.414 [2024-12-09 23:24:29.849707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.849716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:51.414 [2024-12-09 23:24:29.849725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:42:51.414 [2024-12-09 23:24:29.849733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.849822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.414 [2024-12-09 23:24:29.849833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:51.414 [2024-12-09 23:24:29.849841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:42:51.414 [2024-12-09 23:24:29.849849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.414 [2024-12-09 23:24:29.849951] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:51.414 [2024-12-09 23:24:29.849973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:51.414 [2024-12-09 23:24:29.849983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:51.414 [2024-12-09 23:24:29.849992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:51.414 [2024-12-09 23:24:29.850002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:51.414 [2024-12-09 23:24:29.850009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:51.414 [2024-12-09 23:24:29.850017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:51.415 [2024-12-09 23:24:29.850024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:51.415 [2024-12-09 23:24:29.850031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:51.415 [2024-12-09 23:24:29.850038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:51.415 [2024-12-09 23:24:29.850050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:51.415 [2024-12-09 23:24:29.850057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:51.415 [2024-12-09 23:24:29.850065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:51.415 [2024-12-09 23:24:29.850082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:51.415 [2024-12-09 23:24:29.850090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:51.415 [2024-12-09 23:24:29.850097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:51.415 
[2024-12-09 23:24:29.850104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:51.415 [2024-12-09 23:24:29.850111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:51.415 [2024-12-09 23:24:29.850118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:51.415 [2024-12-09 23:24:29.850125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:51.415 [2024-12-09 23:24:29.850132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:51.415 [2024-12-09 23:24:29.850140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:51.415 [2024-12-09 23:24:29.850147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:51.415 [2024-12-09 23:24:29.850155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:51.415 [2024-12-09 23:24:29.850162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:51.415 [2024-12-09 23:24:29.850169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:51.415 [2024-12-09 23:24:29.850176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:51.415 [2024-12-09 23:24:29.850183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:51.415 [2024-12-09 23:24:29.850189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:51.415 [2024-12-09 23:24:29.850197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:51.415 [2024-12-09 23:24:29.850204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:51.415 [2024-12-09 23:24:29.850212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:51.415 [2024-12-09 23:24:29.850236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:51.415 [2024-12-09 23:24:29.850243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:51.415 [2024-12-09 23:24:29.850250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:51.415 [2024-12-09 23:24:29.850256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:51.415 [2024-12-09 23:24:29.850263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:51.415 [2024-12-09 23:24:29.850272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:51.415 [2024-12-09 23:24:29.850279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:51.415 [2024-12-09 23:24:29.850286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:51.415 [2024-12-09 23:24:29.850292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:51.415 [2024-12-09 23:24:29.850299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:51.415 [2024-12-09 23:24:29.850307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:51.415 [2024-12-09 23:24:29.850314] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:51.415 [2024-12-09 23:24:29.850323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:51.415 [2024-12-09 23:24:29.850331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:51.415 [2024-12-09 23:24:29.850339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:51.415 [2024-12-09 23:24:29.850347] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:42:51.415 [2024-12-09 23:24:29.850355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:51.415 [2024-12-09 23:24:29.850362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:51.415 [2024-12-09 23:24:29.850370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:51.415 [2024-12-09 23:24:29.850378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:51.415 [2024-12-09 23:24:29.850385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:51.415 [2024-12-09 23:24:29.850394] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:51.415 [2024-12-09 23:24:29.850404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:51.415 [2024-12-09 23:24:29.850417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:51.415 [2024-12-09 23:24:29.850424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:51.415 [2024-12-09 23:24:29.850432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:51.415 [2024-12-09 23:24:29.850440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:51.415 [2024-12-09 23:24:29.850448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:51.415 [2024-12-09 23:24:29.850456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:51.415 [2024-12-09 23:24:29.850463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:51.415 [2024-12-09 23:24:29.850470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:51.415 [2024-12-09 23:24:29.850478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:51.415 [2024-12-09 23:24:29.850486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:51.415 [2024-12-09 23:24:29.850494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:51.415 [2024-12-09 23:24:29.850501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:51.415 [2024-12-09 23:24:29.850510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:51.415 [2024-12-09 23:24:29.850519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:51.415 [2024-12-09 23:24:29.850527] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:51.415 [2024-12-09 
23:24:29.850535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:51.415 [2024-12-09 23:24:29.850543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:51.415 [2024-12-09 23:24:29.850551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:51.415 [2024-12-09 23:24:29.850558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:51.415 [2024-12-09 23:24:29.850567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:51.415 [2024-12-09 23:24:29.850574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.415 [2024-12-09 23:24:29.850584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:51.415 [2024-12-09 23:24:29.850593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:42:51.415 [2024-12-09 23:24:29.850601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:29.887391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:29.887441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:51.677 [2024-12-09 23:24:29.887455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.742 ms 00:42:51.677 [2024-12-09 23:24:29.887469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:29.887566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:29.887576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:51.677 [2024-12-09 23:24:29.887585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:42:51.677 [2024-12-09 23:24:29.887594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:29.934642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:29.934698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:51.677 [2024-12-09 23:24:29.934711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.984 ms 00:42:51.677 [2024-12-09 23:24:29.934721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:29.934782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:29.934794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:51.677 [2024-12-09 23:24:29.934809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:51.677 [2024-12-09 23:24:29.934818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:29.935582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:29.935616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:51.677 [2024-12-09 23:24:29.935629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:42:51.677 [2024-12-09 23:24:29.935637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:29.935814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:29.935835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:51.677 [2024-12-09 23:24:29.935849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:42:51.677 [2024-12-09 23:24:29.935857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:29.953911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:29.953956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:51.677 [2024-12-09 23:24:29.953968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.031 ms 00:42:51.677 [2024-12-09 23:24:29.953977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:29.969644] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:42:51.677 [2024-12-09 23:24:29.969692] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:42:51.677 [2024-12-09 23:24:29.969707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:29.969717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:42:51.677 [2024-12-09 23:24:29.969728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.610 ms 00:42:51.677 [2024-12-09 23:24:29.969737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:29.996574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:29.996624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:42:51.677 [2024-12-09 23:24:29.996639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.780 ms 00:42:51.677 [2024-12-09 23:24:29.996648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:30.009593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:30.009640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:42:51.677 [2024-12-09 23:24:30.009653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.878 ms 00:42:51.677 [2024-12-09 23:24:30.009661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:30.022639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:30.022686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:42:51.677 [2024-12-09 23:24:30.022699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.928 ms 00:42:51.677 [2024-12-09 23:24:30.022708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:30.023470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:30.023502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:51.677 [2024-12-09 23:24:30.023520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:42:51.677 [2024-12-09 23:24:30.023529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 
23:24:30.098437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:30.098772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:42:51.677 [2024-12-09 23:24:30.098808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.884 ms 00:42:51.677 [2024-12-09 23:24:30.098819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:30.112013] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:42:51.677 [2024-12-09 23:24:30.116516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:30.116560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:51.677 [2024-12-09 23:24:30.116577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.633 ms 00:42:51.677 [2024-12-09 23:24:30.116587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:30.116714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:30.116732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:42:51.677 [2024-12-09 23:24:30.116746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:42:51.677 [2024-12-09 23:24:30.116755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:30.117920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:30.118111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:51.677 [2024-12-09 23:24:30.118131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:42:51.677 [2024-12-09 23:24:30.118141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:30.118187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:30.118200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:51.677 [2024-12-09 23:24:30.118211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:42:51.677 [2024-12-09 23:24:30.118246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.677 [2024-12-09 23:24:30.118299] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:42:51.677 [2024-12-09 23:24:30.118312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.677 [2024-12-09 23:24:30.118322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:42:51.677 [2024-12-09 23:24:30.118332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:42:51.677 [2024-12-09 23:24:30.118342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.939 [2024-12-09 23:24:30.145825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.939 [2024-12-09 23:24:30.145876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:51.939 [2024-12-09 23:24:30.145897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.459 ms 00:42:51.939 [2024-12-09 23:24:30.145906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.939 [2024-12-09 23:24:30.146000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:51.939 [2024-12-09 23:24:30.146013] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:51.939 [2024-12-09 23:24:30.146024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:42:51.939 [2024-12-09 23:24:30.146035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:51.939 [2024-12-09 23:24:30.147668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.883 ms, result 0 00:42:52.879  [2024-12-09T23:24:32.721Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-09T23:24:33.663Z] Copying: 26416/1048576 [kB] (9864 kBps) [2024-12-09T23:24:34.668Z] Copying: 38/1024 [MB] (12 MBps) [2024-12-09T23:24:35.645Z] Copying: 49060/1048576 [kB] (9832 kBps) [2024-12-09T23:24:36.596Z] Copying: 58/1024 [MB] (10 MBps) [2024-12-09T23:24:37.538Z] Copying: 73/1024 [MB] (14 MBps) [2024-12-09T23:24:38.479Z] Copying: 93/1024 [MB] (20 MBps) [2024-12-09T23:24:39.412Z] Copying: 111/1024 [MB] (17 MBps) [2024-12-09T23:24:40.350Z] Copying: 127/1024 [MB] (16 MBps) [2024-12-09T23:24:41.727Z] Copying: 140/1024 [MB] (13 MBps) [2024-12-09T23:24:42.676Z] Copying: 152/1024 [MB] (11 MBps) [2024-12-09T23:24:43.618Z] Copying: 163/1024 [MB] (11 MBps) [2024-12-09T23:24:44.550Z] Copying: 179/1024 [MB] (16 MBps) [2024-12-09T23:24:45.488Z] Copying: 201/1024 [MB] (21 MBps) [2024-12-09T23:24:46.430Z] Copying: 224/1024 [MB] (22 MBps) [2024-12-09T23:24:47.370Z] Copying: 235/1024 [MB] (11 MBps) [2024-12-09T23:24:48.762Z] Copying: 256/1024 [MB] (20 MBps) [2024-12-09T23:24:49.343Z] Copying: 279/1024 [MB] (22 MBps) [2024-12-09T23:24:50.727Z] Copying: 303/1024 [MB] (24 MBps) [2024-12-09T23:24:51.667Z] Copying: 337/1024 [MB] (33 MBps) [2024-12-09T23:24:52.610Z] Copying: 373/1024 [MB] (36 MBps) [2024-12-09T23:24:53.552Z] Copying: 397/1024 [MB] (24 MBps) [2024-12-09T23:24:54.496Z] Copying: 428/1024 [MB] (31 MBps) [2024-12-09T23:24:55.439Z] Copying: 465/1024 [MB] (36 MBps) [2024-12-09T23:24:56.380Z] Copying: 487/1024 [MB] (21 MBps) [2024-12-09T23:24:57.768Z] Copying: 502/1024 [MB] (15 MBps) [2024-12-09T23:24:58.340Z] Copying: 517/1024 [MB] (15 MBps) [2024-12-09T23:24:59.725Z] Copying: 535/1024 [MB] (17 MBps) [2024-12-09T23:25:00.746Z] Copying: 557/1024 [MB] (22 MBps) [2024-12-09T23:25:01.701Z] Copying: 594/1024 [MB] (36 MBps) [2024-12-09T23:25:02.641Z] Copying: 637/1024 [MB] (42 MBps) [2024-12-09T23:25:03.584Z] Copying: 678/1024 [MB] (41 MBps) [2024-12-09T23:25:04.527Z] Copying: 698/1024 [MB] (19 MBps) [2024-12-09T23:25:05.471Z] Copying: 716/1024 [MB] (18 MBps) [2024-12-09T23:25:06.414Z] Copying: 736/1024 [MB] (19 MBps) [2024-12-09T23:25:07.358Z] Copying: 756/1024 [MB] (20 MBps) [2024-12-09T23:25:08.745Z] Copying: 778/1024 [MB] (21 MBps) [2024-12-09T23:25:09.684Z] Copying: 804/1024 [MB] (26 MBps) [2024-12-09T23:25:10.625Z] Copying: 827/1024 [MB] (22 MBps) [2024-12-09T23:25:11.564Z] Copying: 843/1024 [MB] (15 MBps) [2024-12-09T23:25:12.508Z] Copying: 861/1024 [MB] (18 MBps) [2024-12-09T23:25:13.451Z] Copying: 879/1024 [MB] (18 MBps) [2024-12-09T23:25:14.393Z] Copying: 898/1024 [MB] (19 MBps) [2024-12-09T23:25:15.334Z] Copying: 914/1024 [MB] (15 MBps) [2024-12-09T23:25:16.735Z] Copying: 932/1024 [MB] (17 MBps) [2024-12-09T23:25:17.677Z] Copying: 951/1024 [MB] (18 MBps) [2024-12-09T23:25:18.617Z] Copying: 969/1024 [MB] (18 MBps) [2024-12-09T23:25:19.557Z] Copying: 1000/1024 [MB] (31 MBps) [2024-12-09T23:25:19.557Z] Copying: 1023/1024 [MB] (22 MBps) [2024-12-09T23:25:19.557Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-12-09 23:25:19.502710] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.095 [2024-12-09 23:25:19.502812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:41.095 [2024-12-09 23:25:19.502843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:43:41.095 [2024-12-09 23:25:19.502861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.095 [2024-12-09 23:25:19.502907] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:41.095 [2024-12-09 23:25:19.509353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.095 [2024-12-09 23:25:19.509428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:41.095 [2024-12-09 23:25:19.509451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.415 ms 00:43:41.095 [2024-12-09 23:25:19.509469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.095 [2024-12-09 23:25:19.509947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.095 [2024-12-09 23:25:19.509969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:41.095 [2024-12-09 23:25:19.509987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:43:41.095 [2024-12-09 23:25:19.510006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.095 [2024-12-09 23:25:19.515744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.095 [2024-12-09 23:25:19.515907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:41.095 [2024-12-09 23:25:19.515923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.710 ms 00:43:41.095 [2024-12-09 23:25:19.515937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.095 [2024-12-09 23:25:19.522063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.095 [2024-12-09 23:25:19.522092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:41.095 [2024-12-09 23:25:19.522102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:43:41.095 [2024-12-09 23:25:19.522111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.095 [2024-12-09 23:25:19.547826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.095 [2024-12-09 23:25:19.547861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:41.095 [2024-12-09 23:25:19.547873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.660 ms 00:43:41.095 [2024-12-09 23:25:19.547881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.357 [2024-12-09 23:25:19.562555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.357 [2024-12-09 23:25:19.562589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:41.357 [2024-12-09 23:25:19.562601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.638 ms 00:43:41.357 [2024-12-09 23:25:19.562609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.357 [2024-12-09 23:25:19.567114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.357 [2024-12-09 23:25:19.567146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:41.357 [2024-12-09 23:25:19.567156] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.464 ms 00:43:41.357 [2024-12-09 23:25:19.567164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.357 [2024-12-09 23:25:19.591594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.358 [2024-12-09 23:25:19.591625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:41.358 [2024-12-09 23:25:19.591636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.415 ms 00:43:41.358 [2024-12-09 23:25:19.591643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.358 [2024-12-09 23:25:19.615418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.358 [2024-12-09 23:25:19.615448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:41.358 [2024-12-09 23:25:19.615458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.742 ms 00:43:41.358 [2024-12-09 23:25:19.615466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.358 [2024-12-09 23:25:19.638215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.358 [2024-12-09 23:25:19.638258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:41.358 [2024-12-09 23:25:19.638268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.716 ms 00:43:41.358 [2024-12-09 23:25:19.638276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.358 [2024-12-09 23:25:19.661520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.358 [2024-12-09 23:25:19.661669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:41.358 [2024-12-09 23:25:19.661686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.188 ms 00:43:41.358 [2024-12-09 23:25:19.661693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.358 [2024-12-09 23:25:19.661723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:41.358 [2024-12-09 23:25:19.661745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:43:41.358 [2024-12-09 23:25:19.661758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:43:41.358 [2024-12-09 23:25:19.661767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 
23:25:19.661828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.661998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:43:41.358 [2024-12-09 23:25:19.662022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:41.358 [2024-12-09 23:25:19.662359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:41.359 [2024-12-09 23:25:19.662566] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:41.359 [2024-12-09 23:25:19.662573] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 290e4479-0ec0-4dd4-8c2a-b1b4f77d8eec 00:43:41.359 [2024-12-09 23:25:19.662581] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:43:41.359 [2024-12-09 23:25:19.662589] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:41.359 [2024-12-09 23:25:19.662597] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:41.359 [2024-12-09 23:25:19.662605] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:41.359 [2024-12-09 23:25:19.662618] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:41.359 [2024-12-09 23:25:19.662625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:41.359 [2024-12-09 23:25:19.662633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:41.359 [2024-12-09 23:25:19.662640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:41.359 [2024-12-09 23:25:19.662646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:41.359 [2024-12-09 23:25:19.662654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:43:41.359 [2024-12-09 23:25:19.662662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:41.359 [2024-12-09 23:25:19.662671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:43:41.359 [2024-12-09 23:25:19.662681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.359 [2024-12-09 23:25:19.675882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.359 [2024-12-09 23:25:19.675913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:41.359 [2024-12-09 23:25:19.675926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.183 ms 00:43:41.359 [2024-12-09 23:25:19.675935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.359 [2024-12-09 23:25:19.676334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:41.359 [2024-12-09 23:25:19.676365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:41.359 [2024-12-09 23:25:19.676374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:43:41.359 [2024-12-09 23:25:19.676382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.359 [2024-12-09 23:25:19.711774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.359 [2024-12-09 23:25:19.711918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:41.359 [2024-12-09 23:25:19.711935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.359 [2024-12-09 23:25:19.711945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.359 [2024-12-09 23:25:19.712001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.359 [2024-12-09 23:25:19.712015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:41.359 [2024-12-09 23:25:19.712024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.359 [2024-12-09 23:25:19.712031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.359 [2024-12-09 23:25:19.712104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.359 [2024-12-09 23:25:19.712116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:41.359 [2024-12-09 23:25:19.712124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.359 [2024-12-09 23:25:19.712131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.359 [2024-12-09 23:25:19.712147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.359 [2024-12-09 23:25:19.712154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:41.359 [2024-12-09 23:25:19.712166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.359 [2024-12-09 23:25:19.712173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.359 [2024-12-09 23:25:19.796716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.359 [2024-12-09 23:25:19.796777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:41.359 [2024-12-09 23:25:19.796792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.359 [2024-12-09 23:25:19.796802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.621 [2024-12-09 
23:25:19.868697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.621 [2024-12-09 23:25:19.868766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:41.621 [2024-12-09 23:25:19.868780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.621 [2024-12-09 23:25:19.868788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.621 [2024-12-09 23:25:19.868859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.621 [2024-12-09 23:25:19.868869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:41.621 [2024-12-09 23:25:19.868879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.621 [2024-12-09 23:25:19.868888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.621 [2024-12-09 23:25:19.868952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.621 [2024-12-09 23:25:19.868965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:41.621 [2024-12-09 23:25:19.868974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.621 [2024-12-09 23:25:19.868987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.621 [2024-12-09 23:25:19.869090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.621 [2024-12-09 23:25:19.869103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:41.621 [2024-12-09 23:25:19.869113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.621 [2024-12-09 23:25:19.869122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.621 [2024-12-09 23:25:19.869158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.621 [2024-12-09 23:25:19.869169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:41.621 [2024-12-09 23:25:19.869178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.621 [2024-12-09 23:25:19.869187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.621 [2024-12-09 23:25:19.869271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.621 [2024-12-09 23:25:19.869283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:41.621 [2024-12-09 23:25:19.869292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.621 [2024-12-09 23:25:19.869301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.621 [2024-12-09 23:25:19.869367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:41.621 [2024-12-09 23:25:19.869379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:41.621 [2024-12-09 23:25:19.869388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:41.621 [2024-12-09 23:25:19.869402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:41.621 [2024-12-09 23:25:19.869559] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.824 ms, result 0 00:43:42.194 00:43:42.194 00:43:42.194 23:25:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:43:44.742 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 
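The "testfile2: OK" above is the pass condition for the whole dirty-shutdown run: data written before the simulated crash must read back bit-identical once FTL has rebuilt its state from the NV cache. A minimal sketch of that pattern, with helper names assumed rather than copied from test/ftl/dirty_shutdown.sh:

  # Sketch only: $FTL_PID and restore_ftl are placeholders, not the script's real names.
  dd if=/dev/urandom of=testfile2 bs=4K count=256K   # write through the FTL bdev
  md5sum testfile2 > testfile2.md5                   # golden checksum before the crash
  kill -9 $FTL_PID                                   # dirty shutdown: no graceful FTL unload
  restore_ftl                                        # restart the target, reload FTL from NV cache
  md5sum -c testfile2.md5                            # must report "testfile2: OK"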
00:43:44.742 23:25:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:43:44.742 23:25:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:43:44.742 23:25:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:43:44.742 23:25:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:43:44.742 23:25:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:43:44.742 23:25:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:43:44.742 23:25:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:43:44.742 Process with pid 80109 is not found 00:43:44.742 23:25:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80109 00:43:44.742 23:25:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80109 ']' 00:43:44.742 23:25:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80109 00:43:44.742 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80109) - No such process 00:43:44.742 23:25:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80109 is not found' 00:43:44.742 23:25:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:43:45.315 Remove shared memory files 00:43:45.315 23:25:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:43:45.315 23:25:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:43:45.315 23:25:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:43:45.315 23:25:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:43:45.315 23:25:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:43:45.315 23:25:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:43:45.315 23:25:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:43:45.315 ************************************ 00:43:45.315 END TEST ftl_dirty_shutdown 00:43:45.315 ************************************ 00:43:45.315 00:43:45.315 real 3m13.043s 00:43:45.315 user 3m30.051s 00:43:45.315 sys 0m22.736s 00:43:45.315 23:25:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:45.315 23:25:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:43:45.315 23:25:23 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:43:45.315 23:25:23 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:45.315 23:25:23 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:45.315 23:25:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:43:45.315 ************************************ 00:43:45.315 START TEST ftl_upgrade_shutdown 00:43:45.315 ************************************ 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:43:45.315 * Looking for test storage... 
00:43:45.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:45.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:45.315 --rc genhtml_branch_coverage=1 00:43:45.315 --rc genhtml_function_coverage=1 00:43:45.315 --rc genhtml_legend=1 00:43:45.315 --rc geninfo_all_blocks=1 00:43:45.315 --rc geninfo_unexecuted_blocks=1 00:43:45.315 00:43:45.315 ' 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:45.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:45.315 --rc genhtml_branch_coverage=1 00:43:45.315 --rc genhtml_function_coverage=1 00:43:45.315 --rc genhtml_legend=1 00:43:45.315 --rc geninfo_all_blocks=1 00:43:45.315 --rc geninfo_unexecuted_blocks=1 00:43:45.315 00:43:45.315 ' 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:45.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:45.315 --rc genhtml_branch_coverage=1 00:43:45.315 --rc genhtml_function_coverage=1 00:43:45.315 --rc genhtml_legend=1 00:43:45.315 --rc geninfo_all_blocks=1 00:43:45.315 --rc geninfo_unexecuted_blocks=1 00:43:45.315 00:43:45.315 ' 00:43:45.315 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:45.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:45.315 --rc genhtml_branch_coverage=1 00:43:45.315 --rc genhtml_function_coverage=1 00:43:45.315 --rc genhtml_legend=1 00:43:45.315 --rc geninfo_all_blocks=1 00:43:45.316 --rc geninfo_unexecuted_blocks=1 00:43:45.316 00:43:45.316 ' 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:43:45.316 23:25:23 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82216 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82216 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82216 ']' 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:45.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:45.316 23:25:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:43:45.577 [2024-12-09 23:25:23.849029] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
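The target launch visible above (spdk_tgt pinned to core 0, then waitforlisten on pid 82216) follows the stock autotest pattern; once startup completes just below, everything else in this test is driven over JSON-RPC. Stripped of xtrace noise, the handshake reduces to roughly (waitforlisten is the common.sh helper that polls until the socket named in the "Waiting for process..." message answers):

  build/bin/spdk_tgt '--cpumask=[0]' &         # target pinned to core 0
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid                  # blocks until /var/tmp/spdk.sock accepts RPCs
  scripts/rpc.py rpc_get_methods > /dev/null   # any RPC now reaches the target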
00:43:45.577 [2024-12-09 23:25:23.849431] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82216 ] 00:43:45.577 [2024-12-09 23:25:24.016000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:45.838 [2024-12-09 23:25:24.165633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:43:46.785 23:25:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:43:47.048 23:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:43:47.048 23:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:43:47.048 23:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:43:47.048 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:43:47.048 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:47.048 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:43:47.048 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:43:47.048 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:43:47.048 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:47.048 { 00:43:47.048 "name": "basen1", 00:43:47.048 "aliases": [ 00:43:47.048 "cf4abd6a-0d89-4f61-aad9-43166b885074" 00:43:47.048 ], 00:43:47.048 "product_name": "NVMe disk", 00:43:47.048 "block_size": 4096, 00:43:47.048 "num_blocks": 1310720, 00:43:47.048 "uuid": "cf4abd6a-0d89-4f61-aad9-43166b885074", 00:43:47.048 "numa_id": -1, 00:43:47.048 "assigned_rate_limits": { 00:43:47.048 "rw_ios_per_sec": 0, 00:43:47.048 "rw_mbytes_per_sec": 0, 00:43:47.048 "r_mbytes_per_sec": 0, 00:43:47.048 "w_mbytes_per_sec": 0 00:43:47.048 }, 00:43:47.048 "claimed": true, 00:43:47.048 "claim_type": "read_many_write_one", 00:43:47.048 "zoned": false, 00:43:47.048 "supported_io_types": { 00:43:47.048 "read": true, 00:43:47.048 "write": true, 00:43:47.048 "unmap": true, 00:43:47.048 "flush": true, 00:43:47.048 "reset": true, 00:43:47.048 "nvme_admin": true, 00:43:47.048 "nvme_io": true, 00:43:47.048 "nvme_io_md": false, 00:43:47.048 "write_zeroes": true, 00:43:47.048 "zcopy": false, 00:43:47.048 "get_zone_info": false, 00:43:47.048 "zone_management": false, 00:43:47.048 "zone_append": false, 00:43:47.048 "compare": true, 00:43:47.048 "compare_and_write": false, 00:43:47.048 "abort": true, 00:43:47.048 "seek_hole": false, 00:43:47.048 "seek_data": false, 00:43:47.048 "copy": true, 00:43:47.048 "nvme_iov_md": false 00:43:47.048 }, 00:43:47.048 "driver_specific": { 00:43:47.048 "nvme": [ 00:43:47.048 { 00:43:47.048 "pci_address": "0000:00:11.0", 00:43:47.048 "trid": { 00:43:47.048 "trtype": "PCIe", 00:43:47.048 "traddr": "0000:00:11.0" 00:43:47.048 }, 00:43:47.048 "ctrlr_data": { 00:43:47.048 "cntlid": 0, 00:43:47.048 "vendor_id": "0x1b36", 00:43:47.048 "model_number": "QEMU NVMe Ctrl", 00:43:47.048 "serial_number": "12341", 00:43:47.048 "firmware_revision": "8.0.0", 00:43:47.048 "subnqn": "nqn.2019-08.org.qemu:12341", 00:43:47.048 "oacs": { 00:43:47.048 "security": 0, 00:43:47.048 "format": 1, 00:43:47.048 "firmware": 0, 00:43:47.048 "ns_manage": 1 00:43:47.048 }, 00:43:47.048 "multi_ctrlr": false, 00:43:47.048 "ana_reporting": false 00:43:47.048 }, 00:43:47.048 "vs": { 00:43:47.048 "nvme_version": "1.4" 00:43:47.048 }, 00:43:47.048 "ns_data": { 00:43:47.048 "id": 1, 00:43:47.048 "can_share": false 00:43:47.048 } 00:43:47.048 } 00:43:47.048 ], 00:43:47.048 "mp_policy": "active_passive" 00:43:47.048 } 00:43:47.048 } 00:43:47.048 ]' 00:43:47.048 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:47.308 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:43:47.308 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:47.308 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:43:47.308 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:43:47.308 23:25:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:43:47.308 23:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:43:47.308 23:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:43:47.308 23:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:43:47.308 23:25:25 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:43:47.308 23:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:43:47.569 23:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=ff39f0bb-5a33-4fca-8826-5515983a29a0 00:43:47.569 23:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:43:47.569 23:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff39f0bb-5a33-4fca-8826-5515983a29a0 00:43:47.569 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:43:47.830 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=2409bf62-48e2-4016-99b2-9957d501ec02 00:43:47.830 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 2409bf62-48e2-4016-99b2-9957d501ec02 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=8bb34a4c-730c-43f1-b8fc-d2e028df1572 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 8bb34a4c-730c-43f1-b8fc-d2e028df1572 ]] 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 8bb34a4c-730c-43f1-b8fc-d2e028df1572 5120 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=8bb34a4c-730c-43f1-b8fc-d2e028df1572 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8bb34a4c-730c-43f1-b8fc-d2e028df1572 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8bb34a4c-730c-43f1-b8fc-d2e028df1572 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:43:48.090 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8bb34a4c-730c-43f1-b8fc-d2e028df1572 00:43:48.348 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:43:48.348 { 00:43:48.348 "name": "8bb34a4c-730c-43f1-b8fc-d2e028df1572", 00:43:48.348 "aliases": [ 00:43:48.348 "lvs/basen1p0" 00:43:48.348 ], 00:43:48.348 "product_name": "Logical Volume", 00:43:48.348 "block_size": 4096, 00:43:48.348 "num_blocks": 5242880, 00:43:48.348 "uuid": "8bb34a4c-730c-43f1-b8fc-d2e028df1572", 00:43:48.348 "assigned_rate_limits": { 00:43:48.348 "rw_ios_per_sec": 0, 00:43:48.348 "rw_mbytes_per_sec": 0, 00:43:48.348 "r_mbytes_per_sec": 0, 00:43:48.348 "w_mbytes_per_sec": 0 00:43:48.348 }, 00:43:48.348 "claimed": false, 00:43:48.348 "zoned": false, 00:43:48.348 "supported_io_types": { 00:43:48.348 "read": true, 00:43:48.348 "write": true, 00:43:48.348 "unmap": true, 00:43:48.348 "flush": false, 00:43:48.348 "reset": true, 00:43:48.348 "nvme_admin": false, 00:43:48.348 "nvme_io": false, 00:43:48.348 "nvme_io_md": false, 00:43:48.348 "write_zeroes": 
true, 00:43:48.348 "zcopy": false, 00:43:48.348 "get_zone_info": false, 00:43:48.348 "zone_management": false, 00:43:48.348 "zone_append": false, 00:43:48.348 "compare": false, 00:43:48.348 "compare_and_write": false, 00:43:48.348 "abort": false, 00:43:48.348 "seek_hole": true, 00:43:48.348 "seek_data": true, 00:43:48.348 "copy": false, 00:43:48.348 "nvme_iov_md": false 00:43:48.348 }, 00:43:48.348 "driver_specific": { 00:43:48.348 "lvol": { 00:43:48.348 "lvol_store_uuid": "2409bf62-48e2-4016-99b2-9957d501ec02", 00:43:48.348 "base_bdev": "basen1", 00:43:48.348 "thin_provision": true, 00:43:48.348 "num_allocated_clusters": 0, 00:43:48.348 "snapshot": false, 00:43:48.348 "clone": false, 00:43:48.348 "esnap_clone": false 00:43:48.348 } 00:43:48.348 } 00:43:48.348 } 00:43:48.348 ]' 00:43:48.348 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:43:48.348 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:43:48.348 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:43:48.348 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:43:48.348 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:43:48.348 23:25:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:43:48.348 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:43:48.348 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:43:48.348 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:43:48.606 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:43:48.606 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:43:48.606 23:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:43:48.864 23:25:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:43:48.864 23:25:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:43:48.864 23:25:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 8bb34a4c-730c-43f1-b8fc-d2e028df1572 -c cachen1p0 --l2p_dram_limit 2 00:43:49.125 [2024-12-09 23:25:27.359648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.359779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:43:49.125 [2024-12-09 23:25:27.359799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:43:49.125 [2024-12-09 23:25:27.359807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.359859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.359868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:43:49.125 [2024-12-09 23:25:27.359876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:43:49.125 [2024-12-09 23:25:27.359883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.359900] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:43:49.125 [2024-12-09 
23:25:27.360449] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:43:49.125 [2024-12-09 23:25:27.360469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.360475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:43:49.125 [2024-12-09 23:25:27.360486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.570 ms 00:43:49.125 [2024-12-09 23:25:27.360492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.360541] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f00f46ed-ee91-4188-a824-8debf9a8e345 00:43:49.125 [2024-12-09 23:25:27.361832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.361856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:43:49.125 [2024-12-09 23:25:27.361866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:43:49.125 [2024-12-09 23:25:27.361875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.368674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.368706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:43:49.125 [2024-12-09 23:25:27.368714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.766 ms 00:43:49.125 [2024-12-09 23:25:27.368721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.368755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.368763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:43:49.125 [2024-12-09 23:25:27.368769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:43:49.125 [2024-12-09 23:25:27.368779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.368816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.368826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:43:49.125 [2024-12-09 23:25:27.368836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:43:49.125 [2024-12-09 23:25:27.368843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.368860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:43:49.125 [2024-12-09 23:25:27.372099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.372122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:43:49.125 [2024-12-09 23:25:27.372133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.241 ms 00:43:49.125 [2024-12-09 23:25:27.372139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.372163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.372169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:43:49.125 [2024-12-09 23:25:27.372178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:43:49.125 [2024-12-09 23:25:27.372184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.372203] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:43:49.125 [2024-12-09 23:25:27.372333] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:43:49.125 [2024-12-09 23:25:27.372347] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:43:49.125 [2024-12-09 23:25:27.372356] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:43:49.125 [2024-12-09 23:25:27.372367] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:43:49.125 [2024-12-09 23:25:27.372373] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:43:49.125 [2024-12-09 23:25:27.372381] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:43:49.125 [2024-12-09 23:25:27.372387] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:43:49.125 [2024-12-09 23:25:27.372398] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:43:49.125 [2024-12-09 23:25:27.372403] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:43:49.125 [2024-12-09 23:25:27.372411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.372417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:43:49.125 [2024-12-09 23:25:27.372425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:43:49.125 [2024-12-09 23:25:27.372431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.372498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.125 [2024-12-09 23:25:27.372511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:43:49.125 [2024-12-09 23:25:27.372518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:43:49.125 [2024-12-09 23:25:27.372524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.125 [2024-12-09 23:25:27.372602] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:43:49.125 [2024-12-09 23:25:27.372614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:43:49.125 [2024-12-09 23:25:27.372622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:43:49.125 [2024-12-09 23:25:27.372628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:43:49.125 [2024-12-09 23:25:27.372636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:43:49.125 [2024-12-09 23:25:27.372643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:43:49.125 [2024-12-09 23:25:27.372650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:43:49.125 [2024-12-09 23:25:27.372655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:43:49.125 [2024-12-09 23:25:27.372661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:43:49.125 [2024-12-09 23:25:27.372666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:43:49.125 [2024-12-09 23:25:27.372673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:43:49.125 [2024-12-09 23:25:27.372678] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:43:49.125 [2024-12-09 23:25:27.372686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:43:49.125 [2024-12-09 23:25:27.372691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:43:49.125 [2024-12-09 23:25:27.372698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:43:49.125 [2024-12-09 23:25:27.372703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:43:49.125 [2024-12-09 23:25:27.372712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:43:49.125 [2024-12-09 23:25:27.372716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:43:49.125 [2024-12-09 23:25:27.372723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:43:49.126 [2024-12-09 23:25:27.372728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:43:49.126 [2024-12-09 23:25:27.372735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:43:49.126 [2024-12-09 23:25:27.372741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:43:49.126 [2024-12-09 23:25:27.372749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:43:49.126 [2024-12-09 23:25:27.372754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:43:49.126 [2024-12-09 23:25:27.372761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:43:49.126 [2024-12-09 23:25:27.372767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:43:49.126 [2024-12-09 23:25:27.372774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:43:49.126 [2024-12-09 23:25:27.372779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:43:49.126 [2024-12-09 23:25:27.372785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:43:49.126 [2024-12-09 23:25:27.372790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:43:49.126 [2024-12-09 23:25:27.372798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:43:49.126 [2024-12-09 23:25:27.372803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:43:49.126 [2024-12-09 23:25:27.372810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:43:49.126 [2024-12-09 23:25:27.372815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:43:49.126 [2024-12-09 23:25:27.372821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:43:49.126 [2024-12-09 23:25:27.372827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:43:49.126 [2024-12-09 23:25:27.372833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:43:49.126 [2024-12-09 23:25:27.372838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:43:49.126 [2024-12-09 23:25:27.372846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:43:49.126 [2024-12-09 23:25:27.372851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:43:49.126 [2024-12-09 23:25:27.372859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:43:49.126 [2024-12-09 23:25:27.372864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:43:49.126 [2024-12-09 23:25:27.372870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:43:49.126 [2024-12-09 23:25:27.372875] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:43:49.126 [2024-12-09 23:25:27.372883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:43:49.126 [2024-12-09 23:25:27.372889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:43:49.126 [2024-12-09 23:25:27.372897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:43:49.126 [2024-12-09 23:25:27.372902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:43:49.126 [2024-12-09 23:25:27.372910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:43:49.126 [2024-12-09 23:25:27.372915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:43:49.126 [2024-12-09 23:25:27.372922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:43:49.126 [2024-12-09 23:25:27.372927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:43:49.126 [2024-12-09 23:25:27.372933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:43:49.126 [2024-12-09 23:25:27.372940] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:43:49.126 [2024-12-09 23:25:27.372951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.372959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:43:49.126 [2024-12-09 23:25:27.372967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.372973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.372980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:43:49.126 [2024-12-09 23:25:27.372986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:43:49.126 [2024-12-09 23:25:27.372993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:43:49.126 [2024-12-09 23:25:27.372998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:43:49.126 [2024-12-09 23:25:27.373005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.373011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.373020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.373026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.373032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.373039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.373047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:43:49.126 [2024-12-09 23:25:27.373052] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:43:49.126 [2024-12-09 23:25:27.373059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.373066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:49.126 [2024-12-09 23:25:27.373073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:43:49.126 [2024-12-09 23:25:27.373079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:43:49.126 [2024-12-09 23:25:27.373085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:43:49.126 [2024-12-09 23:25:27.373091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:49.126 [2024-12-09 23:25:27.373098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:43:49.126 [2024-12-09 23:25:27.373104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:43:49.126 [2024-12-09 23:25:27.373111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:49.126 [2024-12-09 23:25:27.373151] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
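The scrub flagged above (timed just below: "Scrubbing 5 chunks", roughly 5.4 s) is the cost of initializing the 5120 MiB NV-cache data region the first time an FTL instance is created. For reference, the device being started here was assembled by the xtrace earlier in this test; condensed, the RPC sequence was:

  scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1 (5 GiB NVMe)
  scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
  scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 2409bf62-48e2-4016-99b2-9957d501ec02   # thin 20 GiB base lvol
  scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
  scripts/rpc.py bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0 (5 GiB cache)
  scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 8bb34a4c-730c-43f1-b8fc-d2e028df1572 -c cachen1p0 --l2p_dram_limit 2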
00:43:49.126 [2024-12-09 23:25:27.373163] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:43:54.417 [2024-12-09 23:25:32.770454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.417 [2024-12-09 23:25:32.770538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:43:54.417 [2024-12-09 23:25:32.770557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5397.284 ms 00:43:54.417 [2024-12-09 23:25:32.770571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.417 [2024-12-09 23:25:32.802491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.417 [2024-12-09 23:25:32.802558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:43:54.417 [2024-12-09 23:25:32.802573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.671 ms 00:43:54.418 [2024-12-09 23:25:32.802584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.418 [2024-12-09 23:25:32.802670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.418 [2024-12-09 23:25:32.802684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:43:54.418 [2024-12-09 23:25:32.802694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:43:54.418 [2024-12-09 23:25:32.802714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.418 [2024-12-09 23:25:32.838651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.418 [2024-12-09 23:25:32.838702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:43:54.418 [2024-12-09 23:25:32.838714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.901 ms 00:43:54.418 [2024-12-09 23:25:32.838725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.418 [2024-12-09 23:25:32.838759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.418 [2024-12-09 23:25:32.838774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:43:54.418 [2024-12-09 23:25:32.838783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:43:54.418 [2024-12-09 23:25:32.838793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.418 [2024-12-09 23:25:32.839431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.418 [2024-12-09 23:25:32.839462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:43:54.418 [2024-12-09 23:25:32.839481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.568 ms 00:43:54.418 [2024-12-09 23:25:32.839492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.418 [2024-12-09 23:25:32.839538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.418 [2024-12-09 23:25:32.839549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:43:54.418 [2024-12-09 23:25:32.839562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:43:54.418 [2024-12-09 23:25:32.839575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.418 [2024-12-09 23:25:32.857330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.418 [2024-12-09 23:25:32.857550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:43:54.418 [2024-12-09 23:25:32.857572] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.736 ms 00:43:54.418 [2024-12-09 23:25:32.857583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.677 [2024-12-09 23:25:32.883929] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:43:54.677 [2024-12-09 23:25:32.885495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.677 [2024-12-09 23:25:32.885542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:43:54.677 [2024-12-09 23:25:32.885561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.813 ms 00:43:54.677 [2024-12-09 23:25:32.885571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.678 [2024-12-09 23:25:32.918621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.678 [2024-12-09 23:25:32.918827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:43:54.678 [2024-12-09 23:25:32.918857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.999 ms 00:43:54.678 [2024-12-09 23:25:32.918867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.678 [2024-12-09 23:25:32.919080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.678 [2024-12-09 23:25:32.919113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:43:54.678 [2024-12-09 23:25:32.919132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:43:54.678 [2024-12-09 23:25:32.919140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.678 [2024-12-09 23:25:32.944924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.678 [2024-12-09 23:25:32.944969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:43:54.678 [2024-12-09 23:25:32.944986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.723 ms 00:43:54.678 [2024-12-09 23:25:32.944996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.678 [2024-12-09 23:25:32.970839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.678 [2024-12-09 23:25:32.970885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:43:54.678 [2024-12-09 23:25:32.970900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.785 ms 00:43:54.678 [2024-12-09 23:25:32.970908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.678 [2024-12-09 23:25:32.971554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.678 [2024-12-09 23:25:32.971586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:43:54.678 [2024-12-09 23:25:32.971598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.597 ms 00:43:54.678 [2024-12-09 23:25:32.971609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.678 [2024-12-09 23:25:33.066090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.678 [2024-12-09 23:25:33.066143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:43:54.678 [2024-12-09 23:25:33.066164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 94.417 ms 00:43:54.678 [2024-12-09 23:25:33.066173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.678 [2024-12-09 23:25:33.093888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:43:54.678 [2024-12-09 23:25:33.093937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:43:54.678 [2024-12-09 23:25:33.093954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.595 ms 00:43:54.678 [2024-12-09 23:25:33.093963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.678 [2024-12-09 23:25:33.119858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.678 [2024-12-09 23:25:33.119906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:43:54.678 [2024-12-09 23:25:33.119921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.840 ms 00:43:54.678 [2024-12-09 23:25:33.119929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.936 [2024-12-09 23:25:33.146295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.936 [2024-12-09 23:25:33.146343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:43:54.936 [2024-12-09 23:25:33.146359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.313 ms 00:43:54.936 [2024-12-09 23:25:33.146366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.936 [2024-12-09 23:25:33.146422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.936 [2024-12-09 23:25:33.146432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:43:54.936 [2024-12-09 23:25:33.146447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:43:54.936 [2024-12-09 23:25:33.146455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.936 [2024-12-09 23:25:33.146563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:43:54.936 [2024-12-09 23:25:33.146577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:43:54.936 [2024-12-09 23:25:33.146588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:43:54.936 [2024-12-09 23:25:33.146596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:43:54.936 [2024-12-09 23:25:33.147775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5787.575 ms, result 0 00:43:54.936 { 00:43:54.936 "name": "ftl", 00:43:54.936 "uuid": "f00f46ed-ee91-4188-a824-8debf9a8e345" 00:43:54.936 } 00:43:54.936 23:25:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:43:54.936 [2024-12-09 23:25:33.366851] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:54.936 23:25:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:43:55.193 23:25:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:43:55.450 [2024-12-09 23:25:33.759256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:43:55.450 23:25:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:43:55.708 [2024-12-09 23:25:33.955641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:55.708 23:25:33 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:43:55.967 Fill FTL, iteration 1 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=82360 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 82360 /var/tmp/spdk.tgt.sock 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82360 ']' 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:43:55.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:55.967 23:25:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:43:55.967 [2024-12-09 23:25:34.371061] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
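The tcp_dd helper being traced here wraps three visible steps: boot a second spdk_tgt pinned to core 1 as an NVMe/TCP initiator with its own RPC socket, attach the target's subsystem so the FTL namespace shows up locally as bdev ftln1, then push data with spdk_dd. A rough equivalent assembled only from the paths and flags visible in this trace (a sketch, not the helper's exact code):

  # initiator process; separate RPC socket so it cannot clash with the main target
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  # import the exported FTL namespace over NVMe/TCP; this creates bdev ftln1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
    -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  # fill pass: 1024 one-MiB random-data blocks at queue depth 2, starting at block 0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0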
00:43:55.967 [2024-12-09 23:25:34.371305] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82360 ] 00:43:56.227 [2024-12-09 23:25:34.536657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:56.227 [2024-12-09 23:25:34.638832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:56.798 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:56.798 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:43:56.798 23:25:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:43:57.057 ftln1 00:43:57.057 23:25:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:43:57.057 23:25:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 82360 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82360 ']' 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82360 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82360 00:43:57.315 killing process with pid 82360 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82360' 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82360 00:43:57.315 23:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82360 00:43:59.216 23:25:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:43:59.217 23:25:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:43:59.217 [2024-12-09 23:25:37.224028] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
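Worth noting the lifecycle around pid 82360 in the lines above: the helper target only lives long enough for save_subsystem_config to capture the freshly attached ftln1 stack as JSON; killprocess then reaps it, and spdk_dd rebuilds the same bdevs from that JSON on its own reactor. Reduced to its visible parts, the wrapper does roughly this (sketch; retries and error handling omitted):

  spdk_ini_pid=$!          # the helper spdk_tgt, 82360 in this run
  echo '{"subsystems": ['  # wrap the bdev subsystem dump into a complete config
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
  echo ']}'
  kill "$spdk_ini_pid" && wait "$spdk_ini_pid"   # the kill 82360 / wait 82360 lines above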
00:43:59.217 [2024-12-09 23:25:37.224144] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82402 ] 00:43:59.217 [2024-12-09 23:25:37.384555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:59.217 [2024-12-09 23:25:37.483345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:00.592  [2024-12-09T23:25:39.989Z] Copying: 196/1024 [MB] (196 MBps) [2024-12-09T23:25:40.927Z] Copying: 397/1024 [MB] (201 MBps) [2024-12-09T23:25:41.897Z] Copying: 650/1024 [MB] (253 MBps) [2024-12-09T23:25:42.471Z] Copying: 889/1024 [MB] (239 MBps) [2024-12-09T23:25:43.044Z] Copying: 1024/1024 [MB] (average 224 MBps) 00:44:04.582 00:44:04.582 23:25:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:44:04.582 Calculate MD5 checksum, iteration 1 00:44:04.582 23:25:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:44:04.582 23:25:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:44:04.582 23:25:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:44:04.582 23:25:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:44:04.582 23:25:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:44:04.582 23:25:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:44:04.582 23:25:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:44:04.843 [2024-12-09 23:25:43.052057] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
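The first fill settles at an average of 224 MBps for the full GiB; the same path is then run in reverse to fingerprint what was written, reading the 1024 blocks back off ftln1 into a scratch file and hashing it. The checksum step launching here amounts to (a sketch of the upgrade_shutdown.sh@44-48 lines in this trace):

  tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '   # recorded as sums[0] below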
00:44:04.843 [2024-12-09 23:25:43.052322] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82466 ] 00:44:04.843 [2024-12-09 23:25:43.205718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:04.843 [2024-12-09 23:25:43.301319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:06.225  [2024-12-09T23:25:45.263Z] Copying: 686/1024 [MB] (686 MBps) [2024-12-09T23:25:45.837Z] Copying: 1024/1024 [MB] (average 682 MBps) 00:44:07.375 00:44:07.375 23:25:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:44:07.375 23:25:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:44:09.288 Fill FTL, iteration 2 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1689d398e8e87a4d40bc8a8169540af9 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:44:09.288 23:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:44:09.288 [2024-12-09 23:25:47.308008] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
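Iteration 2 repeats the cycle one gigabyte further in: seek and skip both advance to block 1024, so the second fill lands on LBAs the first pass never wrote and each GiB keeps its own fingerprint (sums[0] is the 1689d... hash above; the second hash follows). Reconstructed from the @28-@48 trace lines, the driving loop looks roughly like this (a sketch only, assuming the counters simply advance by count each pass; $testdir stands in for /home/vagrant/spdk_repo/spdk/test/ftl):

  iterations=2; seek=0; skip=0
  for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
    (( seek += 1024 ))
    echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
    (( skip += 1024 ))
    sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')
  done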
00:44:09.288 [2024-12-09 23:25:47.308253] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82516 ] 00:44:09.288 [2024-12-09 23:25:47.463609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:09.288 [2024-12-09 23:25:47.543765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:10.671  [2024-12-09T23:25:50.077Z] Copying: 250/1024 [MB] (250 MBps) [2024-12-09T23:25:51.019Z] Copying: 493/1024 [MB] (243 MBps) [2024-12-09T23:25:51.963Z] Copying: 742/1024 [MB] (249 MBps) [2024-12-09T23:25:52.224Z] Copying: 990/1024 [MB] (248 MBps) [2024-12-09T23:25:52.797Z] Copying: 1024/1024 [MB] (average 247 MBps) 00:44:14.335 00:44:14.335 Calculate MD5 checksum, iteration 2 00:44:14.335 23:25:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:44:14.335 23:25:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:44:14.335 23:25:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:44:14.335 23:25:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:44:14.335 23:25:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:44:14.335 23:25:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:44:14.335 23:25:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:44:14.335 23:25:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:44:14.335 [2024-12-09 23:25:52.648985] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
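The second GiB fills at an average of 247 MBps and is read back and hashed the same way. With both data passes done, the test turns to the management plane: flip verbose_mode on the ftl bdev, dump every property, and count how many NV-cache chunks actually hold data. A sketch of that RPC sequence, using the exact commands and jq filter that appear in the trace below (the pipe is shorthand; the script routes the JSON through its ftl_get_properties helper):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
  # this run reports used=3: chunks 1 and 2 are CLOSED at utilization 1.0, chunk 3 is OPEN at 0.001953125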
00:44:14.335 [2024-12-09 23:25:52.649108] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82569 ] 00:44:14.596 [2024-12-09 23:25:52.808455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:14.596 [2024-12-09 23:25:52.941376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:16.553  [2024-12-09T23:25:55.273Z] Copying: 670/1024 [MB] (670 MBps) [2024-12-09T23:25:56.204Z] Copying: 1024/1024 [MB] (average 651 MBps) 00:44:17.742 00:44:17.742 23:25:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:44:17.742 23:25:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:44:20.269 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:44:20.270 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fdbc617fb55d47adf5f3935a8cd3e637 00:44:20.270 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:44:20.270 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:44:20.270 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:44:20.270 [2024-12-09 23:25:58.269975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:20.270 [2024-12-09 23:25:58.270032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:44:20.270 [2024-12-09 23:25:58.270044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:44:20.270 [2024-12-09 23:25:58.270052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:20.270 [2024-12-09 23:25:58.270072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:20.270 [2024-12-09 23:25:58.270082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:44:20.270 [2024-12-09 23:25:58.270089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:44:20.270 [2024-12-09 23:25:58.270096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:20.270 [2024-12-09 23:25:58.270112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:20.270 [2024-12-09 23:25:58.270119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:44:20.270 [2024-12-09 23:25:58.270126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:44:20.270 [2024-12-09 23:25:58.270132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:20.270 [2024-12-09 23:25:58.270186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.206 ms, result 0 00:44:20.270 true 00:44:20.270 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:44:20.270 { 00:44:20.270 "name": "ftl", 00:44:20.270 "properties": [ 00:44:20.270 { 00:44:20.270 "name": "superblock_version", 00:44:20.270 "value": 5, 00:44:20.270 "read-only": true 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "name": "base_device", 00:44:20.270 "bands": [ 00:44:20.270 { 00:44:20.270 "id": 0, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 
00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 1, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 2, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 3, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 4, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 5, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 6, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 7, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 8, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 9, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 10, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 11, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 12, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 13, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 14, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 15, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 16, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 17, 00:44:20.270 "state": "FREE", 00:44:20.270 "validity": 0.0 00:44:20.270 } 00:44:20.270 ], 00:44:20.270 "read-only": true 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "name": "cache_device", 00:44:20.270 "type": "bdev", 00:44:20.270 "chunks": [ 00:44:20.270 { 00:44:20.270 "id": 0, 00:44:20.270 "state": "INACTIVE", 00:44:20.270 "utilization": 0.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 1, 00:44:20.270 "state": "CLOSED", 00:44:20.270 "utilization": 1.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 2, 00:44:20.270 "state": "CLOSED", 00:44:20.270 "utilization": 1.0 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 3, 00:44:20.270 "state": "OPEN", 00:44:20.270 "utilization": 0.001953125 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "id": 4, 00:44:20.270 "state": "OPEN", 00:44:20.270 "utilization": 0.0 00:44:20.270 } 00:44:20.270 ], 00:44:20.270 "read-only": true 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "name": "verbose_mode", 00:44:20.270 "value": true, 00:44:20.270 "unit": "", 00:44:20.270 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:44:20.270 }, 00:44:20.270 { 00:44:20.270 "name": "prep_upgrade_on_shutdown", 00:44:20.270 "value": false, 00:44:20.270 "unit": "", 00:44:20.270 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:44:20.270 } 00:44:20.270 ] 00:44:20.270 } 00:44:20.270 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:44:20.270 [2024-12-09 23:25:58.630264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:44:20.270 [2024-12-09 23:25:58.632093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:44:20.270 [2024-12-09 23:25:58.632425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:44:20.270 [2024-12-09 23:25:58.632610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:20.270 [2024-12-09 23:25:58.632881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:20.270 [2024-12-09 23:25:58.633080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:44:20.270 [2024-12-09 23:25:58.633258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:44:20.270 [2024-12-09 23:25:58.633466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:20.270 [2024-12-09 23:25:58.633756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:20.270 true 00:44:20.270 [2024-12-09 23:25:58.633930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:44:20.270 [2024-12-09 23:25:58.633965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:44:20.270 [2024-12-09 23:25:58.633987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:20.270 [2024-12-09 23:25:58.634079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 3.778 ms, result 0 00:44:20.270 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:44:20.270 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:44:20.270 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:44:20.529 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:44:20.529 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:44:20.529 23:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:44:20.789 [2024-12-09 23:25:59.118607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:20.789 [2024-12-09 23:25:59.118803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:44:20.789 [2024-12-09 23:25:59.118822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:44:20.789 [2024-12-09 23:25:59.118830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:20.789 [2024-12-09 23:25:59.118858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:20.789 [2024-12-09 23:25:59.118867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:44:20.789 [2024-12-09 23:25:59.118875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:44:20.789 [2024-12-09 23:25:59.118882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:20.789 [2024-12-09 23:25:59.118900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:20.789 [2024-12-09 23:25:59.118908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:44:20.789 [2024-12-09 23:25:59.118915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:44:20.789 [2024-12-09 23:25:59.118922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:44:20.789 [2024-12-09 23:25:59.118981] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.361 ms, result 0 00:44:20.789 true 00:44:20.789 23:25:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:44:21.051 { 00:44:21.051 "name": "ftl", 00:44:21.051 "properties": [ 00:44:21.051 { 00:44:21.051 "name": "superblock_version", 00:44:21.051 "value": 5, 00:44:21.051 "read-only": true 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "name": "base_device", 00:44:21.051 "bands": [ 00:44:21.051 { 00:44:21.051 "id": 0, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 1, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 2, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 3, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 4, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 5, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 6, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 7, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 8, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 9, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 10, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 11, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 12, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 13, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 14, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 15, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 16, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 17, 00:44:21.051 "state": "FREE", 00:44:21.051 "validity": 0.0 00:44:21.051 } 00:44:21.051 ], 00:44:21.051 "read-only": true 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "name": "cache_device", 00:44:21.051 "type": "bdev", 00:44:21.051 "chunks": [ 00:44:21.051 { 00:44:21.051 "id": 0, 00:44:21.051 "state": "INACTIVE", 00:44:21.051 "utilization": 0.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 1, 00:44:21.051 "state": "CLOSED", 00:44:21.051 "utilization": 1.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 2, 00:44:21.051 "state": "CLOSED", 00:44:21.051 "utilization": 1.0 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 3, 00:44:21.051 "state": "OPEN", 00:44:21.051 "utilization": 0.001953125 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "id": 4, 00:44:21.051 "state": "OPEN", 00:44:21.051 "utilization": 0.0 00:44:21.051 } 00:44:21.051 ], 00:44:21.051 "read-only": true 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "name": "verbose_mode", 
00:44:21.051 "value": true, 00:44:21.051 "unit": "", 00:44:21.051 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:44:21.051 }, 00:44:21.051 { 00:44:21.051 "name": "prep_upgrade_on_shutdown", 00:44:21.051 "value": true, 00:44:21.051 "unit": "", 00:44:21.051 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:44:21.051 } 00:44:21.051 ] 00:44:21.051 } 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82216 ]] 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82216 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82216 ']' 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82216 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82216 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82216' 00:44:21.051 killing process with pid 82216 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82216 00:44:21.051 23:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82216 00:44:21.625 [2024-12-09 23:26:00.071666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:44:21.625 [2024-12-09 23:26:00.086608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:21.625 [2024-12-09 23:26:00.086769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:44:21.887 [2024-12-09 23:26:00.086844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:44:21.887 [2024-12-09 23:26:00.086869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:21.887 [2024-12-09 23:26:00.086910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:44:21.887 [2024-12-09 23:26:00.089760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:21.887 [2024-12-09 23:26:00.089880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:44:21.887 [2024-12-09 23:26:00.089897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.832 ms 00:44:21.887 [2024-12-09 23:26:00.089910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.324895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.882 [2024-12-09 23:26:09.324978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:44:31.882 [2024-12-09 23:26:09.324996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9234.926 ms 00:44:31.882 [2024-12-09 23:26:09.325013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.326749] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:44:31.882 [2024-12-09 23:26:09.326919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:44:31.882 [2024-12-09 23:26:09.326940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.717 ms 00:44:31.882 [2024-12-09 23:26:09.326949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.328172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.882 [2024-12-09 23:26:09.328214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:44:31.882 [2024-12-09 23:26:09.328238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.105 ms 00:44:31.882 [2024-12-09 23:26:09.328247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.339613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.882 [2024-12-09 23:26:09.339653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:44:31.882 [2024-12-09 23:26:09.339665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.319 ms 00:44:31.882 [2024-12-09 23:26:09.339674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.347380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.882 [2024-12-09 23:26:09.347422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:44:31.882 [2024-12-09 23:26:09.347434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.663 ms 00:44:31.882 [2024-12-09 23:26:09.347443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.347554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.882 [2024-12-09 23:26:09.347565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:44:31.882 [2024-12-09 23:26:09.347582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:44:31.882 [2024-12-09 23:26:09.347591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.358000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.882 [2024-12-09 23:26:09.358173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:44:31.882 [2024-12-09 23:26:09.358192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.392 ms 00:44:31.882 [2024-12-09 23:26:09.358200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.368790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.882 [2024-12-09 23:26:09.368946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:44:31.882 [2024-12-09 23:26:09.368966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.452 ms 00:44:31.882 [2024-12-09 23:26:09.368974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.379290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.882 [2024-12-09 23:26:09.379447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:44:31.882 [2024-12-09 23:26:09.379465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.198 ms 00:44:31.882 [2024-12-09 23:26:09.379474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.389581] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.882 [2024-12-09 23:26:09.389625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:44:31.882 [2024-12-09 23:26:09.389635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.936 ms 00:44:31.882 [2024-12-09 23:26:09.389643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.882 [2024-12-09 23:26:09.389684] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:44:31.882 [2024-12-09 23:26:09.389710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:44:31.882 [2024-12-09 23:26:09.389720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:44:31.882 [2024-12-09 23:26:09.389729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:44:31.882 [2024-12-09 23:26:09.389737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:31.882 [2024-12-09 23:26:09.389824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:31.883 [2024-12-09 23:26:09.389832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:31.883 [2024-12-09 23:26:09.389839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:31.883 [2024-12-09 23:26:09.389846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:31.883 [2024-12-09 23:26:09.389856] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:44:31.883 [2024-12-09 23:26:09.389864] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f00f46ed-ee91-4188-a824-8debf9a8e345 00:44:31.883 [2024-12-09 23:26:09.389872] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:44:31.883 [2024-12-09 23:26:09.389879] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:44:31.883 [2024-12-09 23:26:09.389886] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:44:31.883 [2024-12-09 23:26:09.389895] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:44:31.883 [2024-12-09 23:26:09.389902] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:44:31.883 [2024-12-09 23:26:09.389913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:44:31.883 [2024-12-09 23:26:09.389921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:44:31.883 [2024-12-09 23:26:09.389927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:44:31.883 [2024-12-09 23:26:09.389934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:44:31.883 [2024-12-09 23:26:09.389949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.883 [2024-12-09 23:26:09.389962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:44:31.883 [2024-12-09 23:26:09.389971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:44:31.883 [2024-12-09 23:26:09.389980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.403724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.883 [2024-12-09 23:26:09.403765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:44:31.883 [2024-12-09 23:26:09.403777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.712 ms 00:44:31.883 [2024-12-09 23:26:09.403792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.404178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:31.883 [2024-12-09 23:26:09.404195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:44:31.883 [2024-12-09 23:26:09.404205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:44:31.883 [2024-12-09 23:26:09.404213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.450297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.450340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:44:31.883 [2024-12-09 23:26:09.450358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.450367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.450403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.450412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:44:31.883 [2024-12-09 23:26:09.450421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.450429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.450523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.450535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:44:31.883 [2024-12-09 23:26:09.450543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.450557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.450575] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.450589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:44:31.883 [2024-12-09 23:26:09.450598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.450606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.535833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.535886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:44:31.883 [2024-12-09 23:26:09.535899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.535914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.605735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.605952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:44:31.883 [2024-12-09 23:26:09.605974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.605986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.606080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.606091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:44:31.883 [2024-12-09 23:26:09.606100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.606109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.606184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.606196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:44:31.883 [2024-12-09 23:26:09.606205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.606214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.606358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.606369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:44:31.883 [2024-12-09 23:26:09.606379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.606388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.606421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.606434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:44:31.883 [2024-12-09 23:26:09.606443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.606451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.606493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.606503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:44:31.883 [2024-12-09 23:26:09.606512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.606521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 
[2024-12-09 23:26:09.606574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:44:31.883 [2024-12-09 23:26:09.606585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:44:31.883 [2024-12-09 23:26:09.606594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:44:31.883 [2024-12-09 23:26:09.606602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:31.883 [2024-12-09 23:26:09.606739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9520.069 ms, result 0 00:44:33.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82778 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82778 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82778 ']' 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:44:33.270 23:26:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:33.270 [2024-12-09 23:26:11.567535] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
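The 'FTL shutdown' management process above took 9520.069 ms because prep_upgrade_on_shutdown forced a full persist: L2P, NV-cache and band metadata, trim state and the superblock were all written out and the device flipped to the clean state (the stats block records WAF 1.5006, with bands 1-3 holding the two written gigabytes: 261120 + 261120 + 2048 = 524288 valid LBAs). The target now comes back under a fresh pid with the saved configuration, so startup can resume from the persisted superblock instead of re-initializing from scratch, as the load trace that follows suggests. The restart line from this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  # waitforlisten 82778 then blocks until /var/tmp/spdk.sock answers RPCs
  # (the 'Waiting for process to start up...' line above)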
00:44:33.270 [2024-12-09 23:26:11.567689] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82778 ] 00:44:33.541 [2024-12-09 23:26:11.733477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:33.541 [2024-12-09 23:26:11.861337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:34.493 [2024-12-09 23:26:12.664448] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:44:34.493 [2024-12-09 23:26:12.664718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:44:34.493 [2024-12-09 23:26:12.817599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.817655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:44:34.493 [2024-12-09 23:26:12.817669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:44:34.493 [2024-12-09 23:26:12.817678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.817737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.817748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:44:34.493 [2024-12-09 23:26:12.817757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:44:34.493 [2024-12-09 23:26:12.817765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.817792] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:44:34.493 [2024-12-09 23:26:12.818543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:44:34.493 [2024-12-09 23:26:12.818571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.818582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:44:34.493 [2024-12-09 23:26:12.818591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.788 ms 00:44:34.493 [2024-12-09 23:26:12.818600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.820322] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:44:34.493 [2024-12-09 23:26:12.834647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.834695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:44:34.493 [2024-12-09 23:26:12.834717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.327 ms 00:44:34.493 [2024-12-09 23:26:12.834725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.834801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.834811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:44:34.493 [2024-12-09 23:26:12.834820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:44:34.493 [2024-12-09 23:26:12.834828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.843106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 
23:26:12.843148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:44:34.493 [2024-12-09 23:26:12.843160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.193 ms 00:44:34.493 [2024-12-09 23:26:12.843169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.843258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.843269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:44:34.493 [2024-12-09 23:26:12.843278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:44:34.493 [2024-12-09 23:26:12.843285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.843333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.843347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:44:34.493 [2024-12-09 23:26:12.843355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:44:34.493 [2024-12-09 23:26:12.843363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.843389] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:44:34.493 [2024-12-09 23:26:12.847555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.847593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:44:34.493 [2024-12-09 23:26:12.847604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.171 ms 00:44:34.493 [2024-12-09 23:26:12.847616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.847649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.847658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:44:34.493 [2024-12-09 23:26:12.847666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:44:34.493 [2024-12-09 23:26:12.847675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.847726] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:44:34.493 [2024-12-09 23:26:12.847753] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:44:34.493 [2024-12-09 23:26:12.847794] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:44:34.493 [2024-12-09 23:26:12.847809] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:44:34.493 [2024-12-09 23:26:12.847915] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:44:34.493 [2024-12-09 23:26:12.847927] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:44:34.493 [2024-12-09 23:26:12.847938] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:44:34.493 [2024-12-09 23:26:12.847948] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:44:34.493 [2024-12-09 23:26:12.847957] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:44:34.493 [2024-12-09 23:26:12.847969] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:44:34.493 [2024-12-09 23:26:12.847976] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:44:34.493 [2024-12-09 23:26:12.847984] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:44:34.493 [2024-12-09 23:26:12.847993] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:44:34.493 [2024-12-09 23:26:12.848001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.848009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:44:34.493 [2024-12-09 23:26:12.848016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:44:34.493 [2024-12-09 23:26:12.848024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.848108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.493 [2024-12-09 23:26:12.848117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:44:34.493 [2024-12-09 23:26:12.848126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:44:34.493 [2024-12-09 23:26:12.848133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.493 [2024-12-09 23:26:12.848249] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:44:34.493 [2024-12-09 23:26:12.848261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:44:34.493 [2024-12-09 23:26:12.848270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:44:34.493 [2024-12-09 23:26:12.848277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:34.493 [2024-12-09 23:26:12.848286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:44:34.493 [2024-12-09 23:26:12.848293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:44:34.493 [2024-12-09 23:26:12.848300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:44:34.493 [2024-12-09 23:26:12.848307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:44:34.493 [2024-12-09 23:26:12.848316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:44:34.493 [2024-12-09 23:26:12.848324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:34.493 [2024-12-09 23:26:12.848331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:44:34.493 [2024-12-09 23:26:12.848338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:44:34.493 [2024-12-09 23:26:12.848348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:34.493 [2024-12-09 23:26:12.848356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:44:34.493 [2024-12-09 23:26:12.848363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:44:34.493 [2024-12-09 23:26:12.848369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:34.493 [2024-12-09 23:26:12.848376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:44:34.493 [2024-12-09 23:26:12.848383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:44:34.493 [2024-12-09 23:26:12.848389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:34.493 [2024-12-09 23:26:12.848396] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:44:34.493 [2024-12-09 23:26:12.848403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:44:34.493 [2024-12-09 23:26:12.848410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:44:34.493 [2024-12-09 23:26:12.848417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:44:34.493 [2024-12-09 23:26:12.848432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:44:34.493 [2024-12-09 23:26:12.848438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:44:34.494 [2024-12-09 23:26:12.848445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:44:34.494 [2024-12-09 23:26:12.848452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:44:34.494 [2024-12-09 23:26:12.848458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:44:34.494 [2024-12-09 23:26:12.848465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:44:34.494 [2024-12-09 23:26:12.848472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:44:34.494 [2024-12-09 23:26:12.848478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:44:34.494 [2024-12-09 23:26:12.848485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:44:34.494 [2024-12-09 23:26:12.848491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:44:34.494 [2024-12-09 23:26:12.848498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:34.494 [2024-12-09 23:26:12.848505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:44:34.494 [2024-12-09 23:26:12.848511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:44:34.494 [2024-12-09 23:26:12.848518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:34.494 [2024-12-09 23:26:12.848525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:44:34.494 [2024-12-09 23:26:12.848532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:44:34.494 [2024-12-09 23:26:12.848538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:34.494 [2024-12-09 23:26:12.848544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:44:34.494 [2024-12-09 23:26:12.848550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:44:34.494 [2024-12-09 23:26:12.848557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:34.494 [2024-12-09 23:26:12.848564] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:44:34.494 [2024-12-09 23:26:12.848573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:44:34.494 [2024-12-09 23:26:12.848581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:44:34.494 [2024-12-09 23:26:12.848589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:34.494 [2024-12-09 23:26:12.848599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:44:34.494 [2024-12-09 23:26:12.848606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:44:34.494 [2024-12-09 23:26:12.848614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:44:34.494 [2024-12-09 23:26:12.848621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:44:34.494 [2024-12-09 23:26:12.848628] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:44:34.494 [2024-12-09 23:26:12.848634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:44:34.494 [2024-12-09 23:26:12.848642] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:44:34.494 [2024-12-09 23:26:12.848652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:44:34.494 [2024-12-09 23:26:12.848669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:44:34.494 [2024-12-09 23:26:12.848691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:44:34.494 [2024-12-09 23:26:12.848697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:44:34.494 [2024-12-09 23:26:12.848704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:44:34.494 [2024-12-09 23:26:12.848711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:44:34.494 [2024-12-09 23:26:12.848761] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:44:34.494 [2024-12-09 23:26:12.848770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:34.494 [2024-12-09 23:26:12.848787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:44:34.494 [2024-12-09 23:26:12.848793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:44:34.494 [2024-12-09 23:26:12.848800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:44:34.494 [2024-12-09 23:26:12.848808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:34.494 [2024-12-09 23:26:12.848817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:44:34.494 [2024-12-09 23:26:12.848824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.642 ms 00:44:34.494 [2024-12-09 23:26:12.848832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:34.494 [2024-12-09 23:26:12.848874] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:44:34.494 [2024-12-09 23:26:12.848884] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:44:39.777 [2024-12-09 23:26:17.378534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.777 [2024-12-09 23:26:17.378589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:44:39.777 [2024-12-09 23:26:17.378604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4529.646 ms 00:44:39.777 [2024-12-09 23:26:17.378613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.777 [2024-12-09 23:26:17.404090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.777 [2024-12-09 23:26:17.404285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:44:39.777 [2024-12-09 23:26:17.404304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.274 ms 00:44:39.777 [2024-12-09 23:26:17.404313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.777 [2024-12-09 23:26:17.404395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.404410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:44:39.778 [2024-12-09 23:26:17.404419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:44:39.778 [2024-12-09 23:26:17.404426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.434734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.434769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:44:39.778 [2024-12-09 23:26:17.434783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.272 ms 00:44:39.778 [2024-12-09 23:26:17.434790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.434818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.434826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:44:39.778 [2024-12-09 23:26:17.434834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:44:39.778 [2024-12-09 23:26:17.434841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.435212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.435249] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:44:39.778 [2024-12-09 23:26:17.435258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:44:39.778 [2024-12-09 23:26:17.435266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.435310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.435319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:44:39.778 [2024-12-09 23:26:17.435328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:44:39.778 [2024-12-09 23:26:17.435335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.449439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.449470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:44:39.778 [2024-12-09 23:26:17.449480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.082 ms 00:44:39.778 [2024-12-09 23:26:17.449487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.474650] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:44:39.778 [2024-12-09 23:26:17.474690] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:44:39.778 [2024-12-09 23:26:17.474704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.474713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:44:39.778 [2024-12-09 23:26:17.474723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.124 ms 00:44:39.778 [2024-12-09 23:26:17.474730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.488171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.488205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:44:39.778 [2024-12-09 23:26:17.488235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.400 ms 00:44:39.778 [2024-12-09 23:26:17.488245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.500086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.500116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:44:39.778 [2024-12-09 23:26:17.500127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.802 ms 00:44:39.778 [2024-12-09 23:26:17.500133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.511823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.511853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:44:39.778 [2024-12-09 23:26:17.511862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.656 ms 00:44:39.778 [2024-12-09 23:26:17.511870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.512492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.512511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:44:39.778 [2024-12-09 
23:26:17.512520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:44:39.778 [2024-12-09 23:26:17.512528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.569099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.569143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:44:39.778 [2024-12-09 23:26:17.569156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 56.553 ms 00:44:39.778 [2024-12-09 23:26:17.569163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.579817] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:44:39.778 [2024-12-09 23:26:17.580525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.580585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:44:39.778 [2024-12-09 23:26:17.580598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.320 ms 00:44:39.778 [2024-12-09 23:26:17.580606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.580686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.580699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:44:39.778 [2024-12-09 23:26:17.580707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:44:39.778 [2024-12-09 23:26:17.580714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.580771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.580781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:44:39.778 [2024-12-09 23:26:17.580790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:44:39.778 [2024-12-09 23:26:17.580797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.580816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.580825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:44:39.778 [2024-12-09 23:26:17.580835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:44:39.778 [2024-12-09 23:26:17.580842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.580873] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:44:39.778 [2024-12-09 23:26:17.580882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.580890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:44:39.778 [2024-12-09 23:26:17.580898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:44:39.778 [2024-12-09 23:26:17.580905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.604163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.604314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:44:39.778 [2024-12-09 23:26:17.604332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.239 ms 00:44:39.778 [2024-12-09 23:26:17.604340] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.604403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.604412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:44:39.778 [2024-12-09 23:26:17.604420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:44:39.778 [2024-12-09 23:26:17.604428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.605328] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4787.305 ms, result 0 00:44:39.778 [2024-12-09 23:26:17.620632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:39.778 [2024-12-09 23:26:17.636617] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:44:39.778 [2024-12-09 23:26:17.644737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:39.778 23:26:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:39.778 23:26:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:44:39.778 23:26:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:44:39.778 23:26:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:44:39.778 23:26:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:44:39.778 [2024-12-09 23:26:17.876792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.876835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:44:39.778 [2024-12-09 23:26:17.876851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:44:39.778 [2024-12-09 23:26:17.876860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.876882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.876891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:44:39.778 [2024-12-09 23:26:17.876898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:44:39.778 [2024-12-09 23:26:17.876906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.876926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:39.778 [2024-12-09 23:26:17.876934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:44:39.778 [2024-12-09 23:26:17.876941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:44:39.778 [2024-12-09 23:26:17.876948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:39.778 [2024-12-09 23:26:17.877007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.202 ms, result 0 00:44:39.778 true 00:44:39.778 23:26:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:44:39.778 { 00:44:39.778 "name": "ftl", 00:44:39.778 "properties": [ 00:44:39.778 { 00:44:39.778 "name": "superblock_version", 00:44:39.778 "value": 5, 00:44:39.778 "read-only": true 00:44:39.778 }, 
00:44:39.778 { 00:44:39.778 "name": "base_device", 00:44:39.778 "bands": [ 00:44:39.778 { 00:44:39.778 "id": 0, 00:44:39.778 "state": "CLOSED", 00:44:39.778 "validity": 1.0 00:44:39.778 }, 00:44:39.778 { 00:44:39.778 "id": 1, 00:44:39.778 "state": "CLOSED", 00:44:39.778 "validity": 1.0 00:44:39.778 }, 00:44:39.779 { 00:44:39.779 "id": 2, 00:44:39.779 "state": "CLOSED", 00:44:39.779 "validity": 0.007843137254901933 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 3, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 4, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 5, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 6, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 7, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 8, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 9, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 10, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 11, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 12, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 13, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 14, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 15, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 16, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 17, 00:44:39.779 "state": "FREE", 00:44:39.779 "validity": 0.0 00:44:39.779 } 00:44:39.779 ], 00:44:39.779 "read-only": true 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "name": "cache_device", 00:44:39.779 "type": "bdev", 00:44:39.779 "chunks": [ 00:44:39.779 { 00:44:39.779 "id": 0, 00:44:39.779 "state": "INACTIVE", 00:44:39.779 "utilization": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 1, 00:44:39.779 "state": "OPEN", 00:44:39.779 "utilization": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 2, 00:44:39.779 "state": "OPEN", 00:44:39.779 "utilization": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 3, 00:44:39.779 "state": "FREE", 00:44:39.779 "utilization": 0.0 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "id": 4, 00:44:39.779 "state": "FREE", 00:44:39.779 "utilization": 0.0 00:44:39.779 } 00:44:39.779 ], 00:44:39.779 "read-only": true 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "name": "verbose_mode", 00:44:39.779 "value": true, 00:44:39.779 "unit": "", 00:44:39.779 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:44:39.779 }, 00:44:39.779 { 00:44:39.779 "name": "prep_upgrade_on_shutdown", 00:44:39.779 "value": false, 00:44:39.779 "unit": "", 00:44:39.779 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:44:39.779 } 00:44:39.779 ] 00:44:39.779 } 00:44:39.779 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:44:39.779 23:26:18 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:44:39.779 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:44:40.038 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:44:40.038 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:44:40.038 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:44:40.038 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:44:40.038 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:44:40.296 Validate MD5 checksum, iteration 1 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:44:40.296 23:26:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:44:40.296 [2024-12-09 23:26:18.591824] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
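The two jq programs in the xtrace above gate the checksum test on the FTL device being idle: the first counts cache chunks whose utilization is non-zero, the second counts bands reported as OPENED, and both guards ([[ 0 -ne 0 ]]) must fail before any data is read back. A minimal sketch of that check follows, assuming rpc.py can reach the target over its default socket and standing in for the ftl_get_properties helper from upgrade_shutdown.sh@59. Note that the second filter as logged selects a property named "bands" while the dump above nests the band list under "base_device", so against this JSON it counts 0 either way.

    # Sketch only: ftl_get_properties in the test wraps this same RPC call.
    props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)
    # Cache chunks still holding data (utilization != 0.0).
    used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    # Bands currently open for writing.
    opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    # The test only proceeds to test_validate_checksum when both are 0.
    [[ $used -ne 0 ]] && { echo "cache not empty" >&2; exit 1; }
    [[ $opened -ne 0 ]] && { echo "bands still open" >&2; exit 1; }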
00:44:40.296 [2024-12-09 23:26:18.592083] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82871 ] 00:44:40.296 [2024-12-09 23:26:18.749450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:40.555 [2024-12-09 23:26:18.847928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:41.929  [2024-12-09T23:26:21.325Z] Copying: 612/1024 [MB] (612 MBps) [2024-12-09T23:26:22.259Z] Copying: 1024/1024 [MB] (average 608 MBps) 00:44:43.797 00:44:43.797 23:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:44:43.797 23:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:44:46.331 Validate MD5 checksum, iteration 2 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1689d398e8e87a4d40bc8a8169540af9 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1689d398e8e87a4d40bc8a8169540af9 != \1\6\8\9\d\3\9\8\e\8\e\8\7\a\4\d\4\0\b\c\8\a\8\1\6\9\5\4\0\a\f\9 ]] 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:44:46.331 23:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:44:46.331 [2024-12-09 23:26:24.462983] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
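Each "Validate MD5 checksum" pass reads the next 1 GiB window of ftln1 over NVMe/TCP into a scratch file, hashes it, and compares the result against the sum recorded when that window was written, advancing skip by 1024 one-MiB blocks per iteration. Below is a hedged reconstruction of the loop from the xtrace above (upgrade_shutdown.sh@96-105); tcp_dd is the ftl/common.sh@198-199 wrapper around spdk_dd shown in the trace, while iterations, testdir, and the md5sums array are assumed names for values set earlier in the test.

    iterations=2   # this run validates two 1 GiB windows
    skip=0
    for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # 1024 blocks of 1 MiB at queue depth 2, offset past the windows already checked.
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
      # md5sums[] (assumed name): reference sums captured when the data was written.
      [[ $sum == "${md5sums[i]}" ]] || { echo "checksum mismatch" >&2; exit 1; }
    done

In the run above both comparisons pass (1689d398e8e87a4d40bc8a8169540af9 and fdbc617fb55d47adf5f3935a8cd3e637 each match their stored value), which is what lets the test move on to the dirty shutdown.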
00:44:46.331 [2024-12-09 23:26:24.463239] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82938 ] 00:44:46.331 [2024-12-09 23:26:24.623598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:46.331 [2024-12-09 23:26:24.719744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:48.259  [2024-12-09T23:26:26.979Z] Copying: 650/1024 [MB] (650 MBps) [2024-12-09T23:26:30.273Z] Copying: 1024/1024 [MB] (average 638 MBps) 00:44:51.811 00:44:51.811 23:26:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:44:51.811 23:26:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fdbc617fb55d47adf5f3935a8cd3e637 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fdbc617fb55d47adf5f3935a8cd3e637 != \f\d\b\c\6\1\7\f\b\5\5\d\4\7\a\d\f\5\f\3\9\3\5\a\8\c\d\3\e\6\3\7 ]] 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 82778 ]] 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 82778 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83016 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:44:53.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83016 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83016 ']' 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:53.712 23:26:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:44:53.712 [2024-12-09 23:26:31.989246] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:44:53.712 [2024-12-09 23:26:31.989884] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83016 ] 00:44:53.712 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 82778 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:44:53.712 [2024-12-09 23:26:32.146963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:53.971 [2024-12-09 23:26:32.242650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:54.539 [2024-12-09 23:26:32.932356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:44:54.539 [2024-12-09 23:26:32.932425] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:44:54.801 [2024-12-09 23:26:33.084565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.801 [2024-12-09 23:26:33.084618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:44:54.801 [2024-12-09 23:26:33.084633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:44:54.801 [2024-12-09 23:26:33.084642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.801 [2024-12-09 23:26:33.084703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.801 [2024-12-09 23:26:33.084713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:44:54.801 [2024-12-09 23:26:33.084722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:44:54.801 [2024-12-09 23:26:33.084730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.801 [2024-12-09 23:26:33.084757] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:44:54.801 [2024-12-09 23:26:33.085489] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:44:54.801 [2024-12-09 23:26:33.085509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.801 [2024-12-09 23:26:33.085517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:44:54.801 [2024-12-09 23:26:33.085526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.762 ms 00:44:54.801 [2024-12-09 23:26:33.085534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.801 [2024-12-09 23:26:33.085818] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:44:54.801 [2024-12-09 23:26:33.103824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.801 [2024-12-09 23:26:33.103875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:44:54.801 [2024-12-09 23:26:33.103887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.006 ms 00:44:54.801 [2024-12-09 23:26:33.103895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.801 [2024-12-09 23:26:33.113552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:44:54.801 [2024-12-09 23:26:33.113735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:44:54.801 [2024-12-09 23:26:33.113754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:44:54.801 [2024-12-09 23:26:33.113763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.801 [2024-12-09 23:26:33.114110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.801 [2024-12-09 23:26:33.114123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:44:54.801 [2024-12-09 23:26:33.114133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:44:54.801 [2024-12-09 23:26:33.114142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.801 [2024-12-09 23:26:33.114198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.801 [2024-12-09 23:26:33.114208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:44:54.801 [2024-12-09 23:26:33.114242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:44:54.802 [2024-12-09 23:26:33.114251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.802 [2024-12-09 23:26:33.114277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.802 [2024-12-09 23:26:33.114287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:44:54.802 [2024-12-09 23:26:33.114302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:44:54.802 [2024-12-09 23:26:33.114310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.802 [2024-12-09 23:26:33.114334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:44:54.802 [2024-12-09 23:26:33.117568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.802 [2024-12-09 23:26:33.117605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:44:54.802 [2024-12-09 23:26:33.117616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.239 ms 00:44:54.802 [2024-12-09 23:26:33.117623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.802 [2024-12-09 23:26:33.117656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.802 [2024-12-09 23:26:33.117665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:44:54.802 [2024-12-09 23:26:33.117673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:44:54.802 [2024-12-09 23:26:33.117680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.802 [2024-12-09 23:26:33.117717] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:44:54.802 [2024-12-09 23:26:33.117740] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:44:54.802 [2024-12-09 23:26:33.117775] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:44:54.802 [2024-12-09 23:26:33.117795] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:44:54.802 [2024-12-09 23:26:33.117899] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:44:54.802 [2024-12-09 23:26:33.117910] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:44:54.802 [2024-12-09 23:26:33.117921] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:44:54.802 [2024-12-09 23:26:33.117931] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:44:54.802 [2024-12-09 23:26:33.117941] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:44:54.802 [2024-12-09 23:26:33.117949] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:44:54.802 [2024-12-09 23:26:33.117957] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:44:54.802 [2024-12-09 23:26:33.117964] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:44:54.802 [2024-12-09 23:26:33.117971] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:44:54.802 [2024-12-09 23:26:33.117982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.802 [2024-12-09 23:26:33.117990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:44:54.802 [2024-12-09 23:26:33.117998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:44:54.802 [2024-12-09 23:26:33.118006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.802 [2024-12-09 23:26:33.118090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.802 [2024-12-09 23:26:33.118098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:44:54.802 [2024-12-09 23:26:33.118107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:44:54.802 [2024-12-09 23:26:33.118114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.802 [2024-12-09 23:26:33.118214] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:44:54.802 [2024-12-09 23:26:33.118244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:44:54.802 [2024-12-09 23:26:33.118252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:44:54.802 [2024-12-09 23:26:33.118260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:44:54.802 [2024-12-09 23:26:33.118275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:44:54.802 [2024-12-09 23:26:33.118290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:44:54.802 [2024-12-09 23:26:33.118296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:44:54.802 [2024-12-09 23:26:33.118303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:44:54.802 [2024-12-09 23:26:33.118316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:44:54.802 [2024-12-09 23:26:33.118329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:44:54.802 [2024-12-09 23:26:33.118343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:44:54.802 [2024-12-09 23:26:33.118349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:44:54.802 [2024-12-09 23:26:33.118363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:44:54.802 [2024-12-09 23:26:33.118369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:44:54.802 [2024-12-09 23:26:33.118385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:44:54.802 [2024-12-09 23:26:33.118398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:44:54.802 [2024-12-09 23:26:33.118405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:44:54.802 [2024-12-09 23:26:33.118412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:44:54.802 [2024-12-09 23:26:33.118419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:44:54.802 [2024-12-09 23:26:33.118425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:44:54.802 [2024-12-09 23:26:33.118433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:44:54.802 [2024-12-09 23:26:33.118439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:44:54.802 [2024-12-09 23:26:33.118446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:44:54.802 [2024-12-09 23:26:33.118453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:44:54.802 [2024-12-09 23:26:33.118460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:44:54.802 [2024-12-09 23:26:33.118466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:44:54.802 [2024-12-09 23:26:33.118473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:44:54.802 [2024-12-09 23:26:33.118479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:44:54.802 [2024-12-09 23:26:33.118492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:44:54.802 [2024-12-09 23:26:33.118499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:44:54.802 [2024-12-09 23:26:33.118512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:44:54.802 [2024-12-09 23:26:33.118532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:44:54.802 [2024-12-09 23:26:33.118539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118545] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:44:54.802 [2024-12-09 23:26:33.118555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:44:54.802 [2024-12-09 23:26:33.118563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:44:54.802 [2024-12-09 23:26:33.118571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:44:54.802 [2024-12-09 23:26:33.118578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:44:54.802 [2024-12-09 23:26:33.118586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:44:54.802 [2024-12-09 23:26:33.118593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:44:54.802 [2024-12-09 23:26:33.118600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:44:54.802 [2024-12-09 23:26:33.118607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:44:54.802 [2024-12-09 23:26:33.118614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:44:54.802 [2024-12-09 23:26:33.118623] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:44:54.802 [2024-12-09 23:26:33.118632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:54.802 [2024-12-09 23:26:33.118641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:44:54.802 [2024-12-09 23:26:33.118648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:44:54.802 [2024-12-09 23:26:33.118655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:44:54.802 [2024-12-09 23:26:33.118663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:44:54.802 [2024-12-09 23:26:33.118670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:44:54.802 [2024-12-09 23:26:33.118676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:44:54.802 [2024-12-09 23:26:33.118683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:44:54.802 [2024-12-09 23:26:33.118690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:44:54.802 [2024-12-09 23:26:33.118697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:44:54.802 [2024-12-09 23:26:33.118704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:44:54.802 [2024-12-09 23:26:33.118711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:44:54.802 [2024-12-09 23:26:33.118718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:44:54.802 [2024-12-09 23:26:33.118726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:44:54.802 [2024-12-09 23:26:33.118733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:44:54.803 [2024-12-09 23:26:33.118739] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:44:54.803 [2024-12-09 23:26:33.118747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:54.803 [2024-12-09 23:26:33.118758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:54.803 [2024-12-09 23:26:33.118765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:44:54.803 [2024-12-09 23:26:33.118772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:44:54.803 [2024-12-09 23:26:33.118779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:44:54.803 [2024-12-09 23:26:33.118787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.118797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:44:54.803 [2024-12-09 23:26:33.118806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.641 ms 00:44:54.803 [2024-12-09 23:26:33.118812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.803 [2024-12-09 23:26:33.147312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.147356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:44:54.803 [2024-12-09 23:26:33.147369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.447 ms 00:44:54.803 [2024-12-09 23:26:33.147377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.803 [2024-12-09 23:26:33.147423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.147432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:44:54.803 [2024-12-09 23:26:33.147442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:44:54.803 [2024-12-09 23:26:33.147449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.803 [2024-12-09 23:26:33.182675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.182721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:44:54.803 [2024-12-09 23:26:33.182733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.166 ms 00:44:54.803 [2024-12-09 23:26:33.182741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.803 [2024-12-09 23:26:33.182781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.182790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:44:54.803 [2024-12-09 23:26:33.182799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:44:54.803 [2024-12-09 23:26:33.182811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.803 [2024-12-09 23:26:33.182936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.182948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:44:54.803 [2024-12-09 23:26:33.182958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:44:54.803 [2024-12-09 23:26:33.182966] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:44:54.803 [2024-12-09 23:26:33.183011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.183020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:44:54.803 [2024-12-09 23:26:33.183028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:44:54.803 [2024-12-09 23:26:33.183036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.803 [2024-12-09 23:26:33.200378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.200569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:44:54.803 [2024-12-09 23:26:33.200589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.315 ms 00:44:54.803 [2024-12-09 23:26:33.200603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.803 [2024-12-09 23:26:33.200721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.200733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:44:54.803 [2024-12-09 23:26:33.200743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:44:54.803 [2024-12-09 23:26:33.200751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.803 [2024-12-09 23:26:33.233458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.233511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:44:54.803 [2024-12-09 23:26:33.233526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.686 ms 00:44:54.803 [2024-12-09 23:26:33.233535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:54.803 [2024-12-09 23:26:33.243434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:54.803 [2024-12-09 23:26:33.243476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:44:54.803 [2024-12-09 23:26:33.243497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:44:54.803 [2024-12-09 23:26:33.243505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:55.063 [2024-12-09 23:26:33.309561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:55.063 [2024-12-09 23:26:33.309627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:44:55.063 [2024-12-09 23:26:33.309641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 65.985 ms 00:44:55.063 [2024-12-09 23:26:33.309651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:55.063 [2024-12-09 23:26:33.309810] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:44:55.063 [2024-12-09 23:26:33.309922] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:44:55.063 [2024-12-09 23:26:33.310035] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:44:55.063 [2024-12-09 23:26:33.310145] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:44:55.063 [2024-12-09 23:26:33.310156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:55.063 [2024-12-09 23:26:33.310165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:44:55.063 [2024-12-09 
23:26:33.310176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.450 ms 00:44:55.064 [2024-12-09 23:26:33.310185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:55.064 [2024-12-09 23:26:33.310302] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:44:55.064 [2024-12-09 23:26:33.310316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:55.064 [2024-12-09 23:26:33.310327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:44:55.064 [2024-12-09 23:26:33.310338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:44:55.064 [2024-12-09 23:26:33.310346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:55.064 [2024-12-09 23:26:33.327488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:55.064 [2024-12-09 23:26:33.327544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:44:55.064 [2024-12-09 23:26:33.327557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.117 ms 00:44:55.064 [2024-12-09 23:26:33.327565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:55.064 [2024-12-09 23:26:33.336557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:55.064 [2024-12-09 23:26:33.336601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:44:55.064 [2024-12-09 23:26:33.336613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:44:55.064 [2024-12-09 23:26:33.336621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:55.064 [2024-12-09 23:26:33.336719] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:44:55.064 [2024-12-09 23:26:33.336932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:55.064 [2024-12-09 23:26:33.336947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:44:55.064 [2024-12-09 23:26:33.336957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:44:55.064 [2024-12-09 23:26:33.336966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.016 [2024-12-09 23:26:34.260564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.016 [2024-12-09 23:26:34.260620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:44:56.016 [2024-12-09 23:26:34.260634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 922.635 ms 00:44:56.016 [2024-12-09 23:26:34.260642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.016 [2024-12-09 23:26:34.264991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.016 [2024-12-09 23:26:34.265024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:44:56.016 [2024-12-09 23:26:34.265034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.390 ms 00:44:56.016 [2024-12-09 23:26:34.265042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.016 [2024-12-09 23:26:34.265985] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:44:56.016 [2024-12-09 23:26:34.266016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.016 [2024-12-09 23:26:34.266025] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:44:56.016 [2024-12-09 23:26:34.266034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.944 ms 00:44:56.016 [2024-12-09 23:26:34.266041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.016 [2024-12-09 23:26:34.266070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.016 [2024-12-09 23:26:34.266079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:44:56.016 [2024-12-09 23:26:34.266088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:44:56.016 [2024-12-09 23:26:34.266099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.016 [2024-12-09 23:26:34.266132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 929.413 ms, result 0 00:44:56.016 [2024-12-09 23:26:34.266168] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:44:56.017 [2024-12-09 23:26:34.266282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.017 [2024-12-09 23:26:34.266292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:44:56.017 [2024-12-09 23:26:34.266301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.115 ms 00:44:56.017 [2024-12-09 23:26:34.266308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.091842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.092009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:44:56.953 [2024-12-09 23:26:35.092080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 824.688 ms 00:44:56.953 [2024-12-09 23:26:35.092106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.096207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.096324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:44:56.953 [2024-12-09 23:26:35.096375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.158 ms 00:44:56.953 [2024-12-09 23:26:35.096397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.097629] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:44:56.953 [2024-12-09 23:26:35.097686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.097746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:44:56.953 [2024-12-09 23:26:35.097799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.251 ms 00:44:56.953 [2024-12-09 23:26:35.097821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.097884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.097910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:44:56.953 [2024-12-09 23:26:35.097956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:44:56.953 [2024-12-09 23:26:35.097977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 
23:26:35.098030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 831.851 ms, result 0 00:44:56.953 [2024-12-09 23:26:35.098123] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:44:56.953 [2024-12-09 23:26:35.098157] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:44:56.953 [2024-12-09 23:26:35.098187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.098206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:44:56.953 [2024-12-09 23:26:35.098239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1761.489 ms 00:44:56.953 [2024-12-09 23:26:35.098260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.098300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.098327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:44:56.953 [2024-12-09 23:26:35.098347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:44:56.953 [2024-12-09 23:26:35.098365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.109199] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:44:56.953 [2024-12-09 23:26:35.109399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.109430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:44:56.953 [2024-12-09 23:26:35.109558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.960 ms 00:44:56.953 [2024-12-09 23:26:35.109580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.110271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.110307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:44:56.953 [2024-12-09 23:26:35.110403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.609 ms 00:44:56.953 [2024-12-09 23:26:35.110425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.112713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.112787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:44:56.953 [2024-12-09 23:26:35.112801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.229 ms 00:44:56.953 [2024-12-09 23:26:35.112810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.112863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.112872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:44:56.953 [2024-12-09 23:26:35.112880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:44:56.953 [2024-12-09 23:26:35.112890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.112988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.112997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:44:56.953 
[2024-12-09 23:26:35.113005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:44:56.953 [2024-12-09 23:26:35.113012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.113030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.113038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:44:56.953 [2024-12-09 23:26:35.113046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:44:56.953 [2024-12-09 23:26:35.113053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.113081] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:44:56.953 [2024-12-09 23:26:35.113090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.113098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:44:56.953 [2024-12-09 23:26:35.113105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:44:56.953 [2024-12-09 23:26:35.113112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.113160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:44:56.953 [2024-12-09 23:26:35.113169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:44:56.953 [2024-12-09 23:26:35.113177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:44:56.953 [2024-12-09 23:26:35.113184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:44:56.953 [2024-12-09 23:26:35.114034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2029.074 ms, result 0 00:44:56.953 [2024-12-09 23:26:35.126345] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:56.953 [2024-12-09 23:26:35.142345] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:44:56.953 [2024-12-09 23:26:35.150464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:56.953 Validate MD5 checksum, iteration 1 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:44:56.953 23:26:35 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:44:56.953 23:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:44:56.953 [2024-12-09 23:26:35.250288] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:44:56.954 [2024-12-09 23:26:35.250517] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83063 ] 00:44:56.954 [2024-12-09 23:26:35.411532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:57.215 [2024-12-09 23:26:35.525788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:59.133  [2024-12-09T23:26:38.169Z] Copying: 519/1024 [MB] (519 MBps) [2024-12-09T23:26:38.169Z] Copying: 1022/1024 [MB] (503 MBps) [2024-12-09T23:26:40.089Z] Copying: 1024/1024 [MB] (average 511 MBps) 00:45:01.627 00:45:01.627 23:26:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:45:01.627 23:26:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:45:03.541 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:45:03.541 Validate MD5 checksum, iteration 2 00:45:03.541 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1689d398e8e87a4d40bc8a8169540af9 00:45:03.541 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1689d398e8e87a4d40bc8a8169540af9 != \1\6\8\9\d\3\9\8\e\8\e\8\7\a\4\d\4\0\b\c\8\a\8\1\6\9\5\4\0\a\f\9 ]] 00:45:03.541 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:45:03.541 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:45:03.542 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:45:03.542 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:45:03.542 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:45:03.542 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:45:03.542 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:45:03.542 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:45:03.542 23:26:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:45:03.542 
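
The xtrace lines above and below spell out the harness's checksum pass: spdk_dd reads 1024 one-MiB blocks from the ftln1 bdev over NVMe/TCP into a scratch file, the skip offset advances by 1024 MiB per pass, and the file's md5 is compared against the digest recorded before shutdown. A minimal reconstruction of that loop, assuming the harness's tcp_dd wrapper and an md5s[] array of expected digests are in scope (testfile stands in for the real scratch path):

    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Read 1024 x 1 MiB blocks from the restored FTL bdev over NVMe/TCP.
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            # Hash what came back and compare with the pre-shutdown digest.
            sum=$(md5sum "$testfile" | cut -f1 -d' ')
            [[ $sum == "${md5s[i]}" ]] || return 1
        done
    }

A matching digest is the whole point of the upgrade_shutdown test: data written before the FTL device was shut down must read back bit-identical after recovery, which is what the two successful compares in this log demonstrate.
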
[2024-12-09 23:26:41.943525] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:45:03.542 [2024-12-09 23:26:41.943789] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83131 ] 00:45:03.800 [2024-12-09 23:26:42.104268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:03.800 [2024-12-09 23:26:42.204737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:05.706  [2024-12-09T23:26:44.740Z] Copying: 613/1024 [MB] (613 MBps) [2024-12-09T23:26:46.130Z] Copying: 1024/1024 [MB] (average 567 MBps) 00:45:07.668 00:45:07.668 23:26:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:45:07.668 23:26:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:45:09.568 23:26:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:45:09.568 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fdbc617fb55d47adf5f3935a8cd3e637 00:45:09.568 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fdbc617fb55d47adf5f3935a8cd3e637 != \f\d\b\c\6\1\7\f\b\5\5\d\4\7\a\d\f\5\f\3\9\3\5\a\8\c\d\3\e\6\3\7 ]] 00:45:09.568 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:45:09.568 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:45:09.568 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:45:09.568 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:45:09.568 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:45:09.568 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83016 ]] 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83016 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83016 ']' 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83016 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83016 00:45:09.826 killing process with pid 83016 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83016' 
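
The trace above has just entered the harness's killprocess helper for the target (pid 83016): it checks that the PID is non-empty and alive, looks up the command name so it never signals a sudo wrapper, then kills and reaps the process, as the entries just below confirm. A sketch of that helper, reconstructed from the xtrace output (the real version in autotest_common.sh may carry extra retry logic):

    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1             # refuse an empty PID
        kill -0 "$pid" || return 1            # bail out if it already exited
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name != sudo ]] || return 1   # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reap it so shutdown can proceed
    }
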
00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83016 00:45:09.826 23:26:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83016 00:45:10.400 [2024-12-09 23:26:48.821779] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:45:10.400 [2024-12-09 23:26:48.836569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.400 [2024-12-09 23:26:48.836609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:45:10.400 [2024-12-09 23:26:48.836622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:45:10.400 [2024-12-09 23:26:48.836630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.400 [2024-12-09 23:26:48.836652] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:45:10.400 [2024-12-09 23:26:48.839291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.400 [2024-12-09 23:26:48.839318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:45:10.400 [2024-12-09 23:26:48.839334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.625 ms 00:45:10.400 [2024-12-09 23:26:48.839341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.400 [2024-12-09 23:26:48.839577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.400 [2024-12-09 23:26:48.839587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:45:10.400 [2024-12-09 23:26:48.839595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.215 ms 00:45:10.400 [2024-12-09 23:26:48.839602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.400 [2024-12-09 23:26:48.841017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.400 [2024-12-09 23:26:48.841043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:45:10.400 [2024-12-09 23:26:48.841053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.400 ms 00:45:10.400 [2024-12-09 23:26:48.841064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.400 [2024-12-09 23:26:48.842196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.400 [2024-12-09 23:26:48.842227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:45:10.400 [2024-12-09 23:26:48.842236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.103 ms 00:45:10.400 [2024-12-09 23:26:48.842245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.400 [2024-12-09 23:26:48.852739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.400 [2024-12-09 23:26:48.852772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:45:10.400 [2024-12-09 23:26:48.852783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.443 ms 00:45:10.400 [2024-12-09 23:26:48.852796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.400 [2024-12-09 23:26:48.858267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.400 [2024-12-09 23:26:48.858297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:45:10.400 [2024-12-09 23:26:48.858308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.436 ms 00:45:10.400 [2024-12-09 
23:26:48.858317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.400 [2024-12-09 23:26:48.858397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.400 [2024-12-09 23:26:48.858407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:45:10.400 [2024-12-09 23:26:48.858416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:45:10.400 [2024-12-09 23:26:48.858429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.660 [2024-12-09 23:26:48.868487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.660 [2024-12-09 23:26:48.868517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:45:10.660 [2024-12-09 23:26:48.868528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.039 ms 00:45:10.660 [2024-12-09 23:26:48.868535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.660 [2024-12-09 23:26:48.878944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.660 [2024-12-09 23:26:48.878969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:45:10.660 [2024-12-09 23:26:48.878977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.377 ms 00:45:10.660 [2024-12-09 23:26:48.878983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.660 [2024-12-09 23:26:48.885958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.660 [2024-12-09 23:26:48.885985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:45:10.660 [2024-12-09 23:26:48.885992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.948 ms 00:45:10.660 [2024-12-09 23:26:48.885998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.660 [2024-12-09 23:26:48.893139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.660 [2024-12-09 23:26:48.893166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:45:10.660 [2024-12-09 23:26:48.893173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.075 ms 00:45:10.660 [2024-12-09 23:26:48.893179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.660 [2024-12-09 23:26:48.893205] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:45:10.660 [2024-12-09 23:26:48.893230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:45:10.660 [2024-12-09 23:26:48.893239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:45:10.660 [2024-12-09 23:26:48.893245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:45:10.660 [2024-12-09 23:26:48.893251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 
/ 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:10.660 [2024-12-09 23:26:48.893340] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:45:10.660 [2024-12-09 23:26:48.893359] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f00f46ed-ee91-4188-a824-8debf9a8e345 00:45:10.660 [2024-12-09 23:26:48.893366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:45:10.660 [2024-12-09 23:26:48.893372] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:45:10.660 [2024-12-09 23:26:48.893377] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:45:10.660 [2024-12-09 23:26:48.893384] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:45:10.660 [2024-12-09 23:26:48.893389] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:45:10.660 [2024-12-09 23:26:48.893395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:45:10.660 [2024-12-09 23:26:48.893405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:45:10.660 [2024-12-09 23:26:48.893410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:45:10.660 [2024-12-09 23:26:48.893416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:45:10.660 [2024-12-09 23:26:48.893425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.660 [2024-12-09 23:26:48.893431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:45:10.660 [2024-12-09 23:26:48.893438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.221 ms 00:45:10.660 [2024-12-09 23:26:48.893444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.660 [2024-12-09 23:26:48.903199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.660 [2024-12-09 23:26:48.903236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:45:10.660 [2024-12-09 23:26:48.903244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.740 ms 00:45:10.661 [2024-12-09 23:26:48.903251] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:48.903530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:45:10.661 [2024-12-09 23:26:48.903540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:45:10.661 [2024-12-09 23:26:48.903546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:45:10.661 [2024-12-09 23:26:48.903552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:48.937170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:48.937198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:45:10.661 [2024-12-09 23:26:48.937207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:48.937214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:48.938171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:48.938290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:45:10.661 [2024-12-09 23:26:48.938304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:48.938310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:48.938379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:48.938388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:45:10.661 [2024-12-09 23:26:48.938394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:48.938400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:48.938416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:48.938423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:45:10.661 [2024-12-09 23:26:48.938428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:48.938434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:49.000451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:49.000495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:45:10.661 [2024-12-09 23:26:49.000506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:49.000512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:49.051042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:49.051080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:45:10.661 [2024-12-09 23:26:49.051090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:49.051096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:49.051149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:49.051158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:45:10.661 [2024-12-09 23:26:49.051164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
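
One figure worth decoding from the ftl_dev_dump_stats block a few entries above: WAF (write amplification factor) is total media writes divided by user writes, so with total writes: 320 and user writes: 0 the dump legitimately prints WAF: inf rather than an error. The arithmetic as a trivial shell check (values copied from the dump; the division rule is inferred from the output, not taken from ftl_debug.c):

    total_writes=320 user_writes=0
    if ((user_writes == 0)); then
        echo "WAF: inf"    # no user I/O in this run, so the ratio is undefined
    else
        echo "WAF: $((total_writes / user_writes))"   # integer sketch of the real float
    fi
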
00:45:10.661 [2024-12-09 23:26:49.051170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:49.051214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:49.051245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:45:10.661 [2024-12-09 23:26:49.051252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:49.051258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:49.051330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:49.051338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:45:10.661 [2024-12-09 23:26:49.051344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:49.051350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:49.051374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:49.051382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:45:10.661 [2024-12-09 23:26:49.051390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:49.051395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:49.051424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:49.051431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:45:10.661 [2024-12-09 23:26:49.051437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:49.051443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:49.051475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:45:10.661 [2024-12-09 23:26:49.051484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:45:10.661 [2024-12-09 23:26:49.051490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:45:10.661 [2024-12-09 23:26:49.051495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:45:10.661 [2024-12-09 23:26:49.051584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 215.000 ms, result 0 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:45:11.604 Remove shared memory files 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f 
rm -f 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid82778 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:45:11.604 ************************************ 00:45:11.604 END TEST ftl_upgrade_shutdown 00:45:11.604 ************************************ 00:45:11.604 00:45:11.604 real 1m26.147s 00:45:11.604 user 1m59.078s 00:45:11.604 sys 0m18.859s 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:11.604 23:26:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:45:11.604 23:26:49 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:45:11.604 23:26:49 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:45:11.604 Process with pid 75282 is not found 00:45:11.604 23:26:49 ftl -- ftl/ftl.sh@14 -- # killprocess 75282 00:45:11.604 23:26:49 ftl -- common/autotest_common.sh@954 -- # '[' -z 75282 ']' 00:45:11.604 23:26:49 ftl -- common/autotest_common.sh@958 -- # kill -0 75282 00:45:11.604 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75282) - No such process 00:45:11.604 23:26:49 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75282 is not found' 00:45:11.604 23:26:49 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:45:11.604 23:26:49 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83251 00:45:11.604 23:26:49 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83251 00:45:11.604 23:26:49 ftl -- common/autotest_common.sh@835 -- # '[' -z 83251 ']' 00:45:11.604 23:26:49 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:11.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:11.604 23:26:49 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:11.604 23:26:49 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:11.604 23:26:49 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:11.604 23:26:49 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:11.604 23:26:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:45:11.604 [2024-12-09 23:26:49.842780] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
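
At this point ftl.sh has relaunched spdk_tgt (pid 83251) for final cleanup and is blocking in waitforlisten until the target's RPC socket answers. Conceptually that helper polls the socket for as long as the process stays alive, along these lines (a sketch assuming rpc.py and the rpc_get_methods call behave as in mainline SPDK; the real helper adds timeout handling):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while kill -0 "$pid" 2> /dev/null; do
            # The target is up once its RPC socket answers a basic request.
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1    # the target died before its socket came up
    }

Once the socket is live, the log continues with bdev_nvme_attach_controller and the lvstore sweep that clears any leftover logical volumes before the drives are handed back to the kernel driver.
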
00:45:11.604 [2024-12-09 23:26:49.843041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83251 ] 00:45:11.604 [2024-12-09 23:26:50.001152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:11.865 [2024-12-09 23:26:50.087269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:12.437 23:26:50 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:12.437 23:26:50 ftl -- common/autotest_common.sh@868 -- # return 0 00:45:12.437 23:26:50 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:45:12.698 nvme0n1 00:45:12.698 23:26:50 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:45:12.698 23:26:50 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:45:12.698 23:26:50 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:45:12.698 23:26:51 ftl -- ftl/common.sh@28 -- # stores=2409bf62-48e2-4016-99b2-9957d501ec02 00:45:12.698 23:26:51 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:45:12.698 23:26:51 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2409bf62-48e2-4016-99b2-9957d501ec02 00:45:12.959 23:26:51 ftl -- ftl/ftl.sh@23 -- # killprocess 83251 00:45:12.959 23:26:51 ftl -- common/autotest_common.sh@954 -- # '[' -z 83251 ']' 00:45:12.959 23:26:51 ftl -- common/autotest_common.sh@958 -- # kill -0 83251 00:45:12.959 23:26:51 ftl -- common/autotest_common.sh@959 -- # uname 00:45:12.959 23:26:51 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:12.959 23:26:51 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83251 00:45:12.960 killing process with pid 83251 00:45:12.960 23:26:51 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:12.960 23:26:51 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:12.960 23:26:51 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83251' 00:45:12.960 23:26:51 ftl -- common/autotest_common.sh@973 -- # kill 83251 00:45:12.960 23:26:51 ftl -- common/autotest_common.sh@978 -- # wait 83251 00:45:14.344 23:26:52 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:45:14.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:14.344 Waiting for block devices as requested 00:45:14.344 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:45:14.607 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:45:14.607 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:45:14.883 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:45:20.172 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:45:20.172 Remove shared memory files 00:45:20.172 23:26:58 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:45:20.172 23:26:58 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:45:20.172 23:26:58 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:45:20.172 23:26:58 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:45:20.172 23:26:58 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:45:20.172 23:26:58 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:45:20.172 23:26:58 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:45:20.172 
************************************ 00:45:20.172 END TEST ftl 00:45:20.172 ************************************ 00:45:20.172 00:45:20.172 real 11m48.831s 00:45:20.172 user 14m10.750s 00:45:20.172 sys 1m5.679s 00:45:20.172 23:26:58 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:20.172 23:26:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:45:20.172 23:26:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:45:20.172 23:26:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:45:20.172 23:26:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:20.172 23:26:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:45:20.172 23:26:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:20.172 23:26:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:20.172 23:26:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:45:20.172 23:26:58 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:45:20.172 23:26:58 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:45:20.172 23:26:58 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:45:20.172 23:26:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:20.172 23:26:58 -- common/autotest_common.sh@10 -- # set +x 00:45:20.172 23:26:58 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:45:20.172 23:26:58 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:45:20.172 23:26:58 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:45:20.172 23:26:58 -- common/autotest_common.sh@10 -- # set +x 00:45:21.106 INFO: APP EXITING 00:45:21.106 INFO: killing all VMs 00:45:21.106 INFO: killing vhost app 00:45:21.106 INFO: EXIT DONE 00:45:21.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:21.937 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:45:21.937 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:45:21.937 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:45:21.937 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:45:22.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:22.766 Cleaning 00:45:22.766 Removing: /var/run/dpdk/spdk0/config 00:45:22.766 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:22.766 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:22.766 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:22.766 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:22.766 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:22.766 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:22.766 Removing: /var/run/dpdk/spdk0 00:45:22.766 Removing: /var/run/dpdk/spdk_pid56931 00:45:22.766 Removing: /var/run/dpdk/spdk_pid57133 00:45:22.766 Removing: /var/run/dpdk/spdk_pid57346 00:45:22.766 Removing: /var/run/dpdk/spdk_pid57444 00:45:22.766 Removing: /var/run/dpdk/spdk_pid57478 00:45:22.766 Removing: /var/run/dpdk/spdk_pid57601 00:45:22.766 Removing: /var/run/dpdk/spdk_pid57619 00:45:22.766 Removing: /var/run/dpdk/spdk_pid57812 00:45:22.766 Removing: /var/run/dpdk/spdk_pid57899 00:45:22.766 Removing: /var/run/dpdk/spdk_pid57997 00:45:22.766 Removing: /var/run/dpdk/spdk_pid58108 00:45:22.766 Removing: /var/run/dpdk/spdk_pid58205 00:45:22.766 Removing: /var/run/dpdk/spdk_pid58244 00:45:22.766 Removing: /var/run/dpdk/spdk_pid58281 00:45:22.766 Removing: /var/run/dpdk/spdk_pid58351 00:45:22.766 Removing: /var/run/dpdk/spdk_pid58463 00:45:22.766 Removing: /var/run/dpdk/spdk_pid58910 00:45:22.766 Removing: /var/run/dpdk/spdk_pid58974 
00:45:22.766 Removing: /var/run/dpdk/spdk_pid59037 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59059 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59190 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59212 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59349 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59365 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59429 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59447 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59511 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59529 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59724 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59755 00:45:22.766 Removing: /var/run/dpdk/spdk_pid59844 00:45:22.766 Removing: /var/run/dpdk/spdk_pid60027 00:45:22.766 Removing: /var/run/dpdk/spdk_pid60117 00:45:22.766 Removing: /var/run/dpdk/spdk_pid60163 00:45:22.766 Removing: /var/run/dpdk/spdk_pid60634 00:45:22.766 Removing: /var/run/dpdk/spdk_pid60732 00:45:22.766 Removing: /var/run/dpdk/spdk_pid60842 00:45:22.766 Removing: /var/run/dpdk/spdk_pid60895 00:45:22.766 Removing: /var/run/dpdk/spdk_pid60926 00:45:22.766 Removing: /var/run/dpdk/spdk_pid61010 00:45:22.766 Removing: /var/run/dpdk/spdk_pid61642 00:45:22.766 Removing: /var/run/dpdk/spdk_pid61678 00:45:22.766 Removing: /var/run/dpdk/spdk_pid62186 00:45:22.766 Removing: /var/run/dpdk/spdk_pid62284 00:45:22.766 Removing: /var/run/dpdk/spdk_pid62399 00:45:22.766 Removing: /var/run/dpdk/spdk_pid62452 00:45:22.766 Removing: /var/run/dpdk/spdk_pid62477 00:45:22.766 Removing: /var/run/dpdk/spdk_pid62503 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64371 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64502 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64512 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64524 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64563 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64567 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64579 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64624 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64628 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64640 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64685 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64689 00:45:22.766 Removing: /var/run/dpdk/spdk_pid64701 00:45:22.766 Removing: /var/run/dpdk/spdk_pid66081 00:45:22.766 Removing: /var/run/dpdk/spdk_pid66178 00:45:22.766 Removing: /var/run/dpdk/spdk_pid67582 00:45:22.766 Removing: /var/run/dpdk/spdk_pid69321 00:45:22.766 Removing: /var/run/dpdk/spdk_pid69390 00:45:22.766 Removing: /var/run/dpdk/spdk_pid69465 00:45:22.766 Removing: /var/run/dpdk/spdk_pid69575 00:45:22.766 Removing: /var/run/dpdk/spdk_pid69661 00:45:22.766 Removing: /var/run/dpdk/spdk_pid69768 00:45:22.766 Removing: /var/run/dpdk/spdk_pid69837 00:45:22.766 Removing: /var/run/dpdk/spdk_pid69912 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70016 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70108 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70208 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70272 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70348 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70457 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70544 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70644 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70713 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70788 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70892 00:45:22.766 Removing: /var/run/dpdk/spdk_pid70984 00:45:22.766 Removing: /var/run/dpdk/spdk_pid71075 00:45:22.766 Removing: /var/run/dpdk/spdk_pid71148 00:45:22.766 Removing: /var/run/dpdk/spdk_pid71217 00:45:22.766 Removing: 
/var/run/dpdk/spdk_pid71291 00:45:22.766 Removing: /var/run/dpdk/spdk_pid71365 00:45:22.766 Removing: /var/run/dpdk/spdk_pid71474 00:45:22.766 Removing: /var/run/dpdk/spdk_pid71559 00:45:22.766 Removing: /var/run/dpdk/spdk_pid71654 00:45:22.766 Removing: /var/run/dpdk/spdk_pid71717 00:45:22.766 Removing: /var/run/dpdk/spdk_pid71797 00:45:23.028 Removing: /var/run/dpdk/spdk_pid71871 00:45:23.028 Removing: /var/run/dpdk/spdk_pid71940 00:45:23.028 Removing: /var/run/dpdk/spdk_pid72043 00:45:23.028 Removing: /var/run/dpdk/spdk_pid72134 00:45:23.028 Removing: /var/run/dpdk/spdk_pid72283 00:45:23.028 Removing: /var/run/dpdk/spdk_pid72557 00:45:23.028 Removing: /var/run/dpdk/spdk_pid72588 00:45:23.028 Removing: /var/run/dpdk/spdk_pid73030 00:45:23.028 Removing: /var/run/dpdk/spdk_pid73215 00:45:23.028 Removing: /var/run/dpdk/spdk_pid73318 00:45:23.028 Removing: /var/run/dpdk/spdk_pid73433 00:45:23.028 Removing: /var/run/dpdk/spdk_pid73477 00:45:23.028 Removing: /var/run/dpdk/spdk_pid73503 00:45:23.028 Removing: /var/run/dpdk/spdk_pid73810 00:45:23.028 Removing: /var/run/dpdk/spdk_pid73864 00:45:23.028 Removing: /var/run/dpdk/spdk_pid73937 00:45:23.028 Removing: /var/run/dpdk/spdk_pid74331 00:45:23.028 Removing: /var/run/dpdk/spdk_pid74481 00:45:23.028 Removing: /var/run/dpdk/spdk_pid75282 00:45:23.028 Removing: /var/run/dpdk/spdk_pid75417 00:45:23.028 Removing: /var/run/dpdk/spdk_pid75642 00:45:23.028 Removing: /var/run/dpdk/spdk_pid75734 00:45:23.028 Removing: /var/run/dpdk/spdk_pid76020 00:45:23.028 Removing: /var/run/dpdk/spdk_pid76272 00:45:23.028 Removing: /var/run/dpdk/spdk_pid76609 00:45:23.028 Removing: /var/run/dpdk/spdk_pid76788 00:45:23.028 Removing: /var/run/dpdk/spdk_pid76907 00:45:23.028 Removing: /var/run/dpdk/spdk_pid76964 00:45:23.028 Removing: /var/run/dpdk/spdk_pid77115 00:45:23.028 Removing: /var/run/dpdk/spdk_pid77140 00:45:23.028 Removing: /var/run/dpdk/spdk_pid77205 00:45:23.028 Removing: /var/run/dpdk/spdk_pid77441 00:45:23.028 Removing: /var/run/dpdk/spdk_pid77664 00:45:23.028 Removing: /var/run/dpdk/spdk_pid78251 00:45:23.028 Removing: /var/run/dpdk/spdk_pid79214 00:45:23.028 Removing: /var/run/dpdk/spdk_pid79732 00:45:23.028 Removing: /var/run/dpdk/spdk_pid80109 00:45:23.028 Removing: /var/run/dpdk/spdk_pid80242 00:45:23.028 Removing: /var/run/dpdk/spdk_pid80318 00:45:23.028 Removing: /var/run/dpdk/spdk_pid80695 00:45:23.028 Removing: /var/run/dpdk/spdk_pid80749 00:45:23.028 Removing: /var/run/dpdk/spdk_pid81236 00:45:23.028 Removing: /var/run/dpdk/spdk_pid81601 00:45:23.028 Removing: /var/run/dpdk/spdk_pid82216 00:45:23.028 Removing: /var/run/dpdk/spdk_pid82360 00:45:23.028 Removing: /var/run/dpdk/spdk_pid82402 00:45:23.028 Removing: /var/run/dpdk/spdk_pid82466 00:45:23.028 Removing: /var/run/dpdk/spdk_pid82516 00:45:23.028 Removing: /var/run/dpdk/spdk_pid82569 00:45:23.028 Removing: /var/run/dpdk/spdk_pid82778 00:45:23.028 Removing: /var/run/dpdk/spdk_pid82871 00:45:23.028 Removing: /var/run/dpdk/spdk_pid82938 00:45:23.028 Removing: /var/run/dpdk/spdk_pid83016 00:45:23.028 Removing: /var/run/dpdk/spdk_pid83063 00:45:23.028 Removing: /var/run/dpdk/spdk_pid83131 00:45:23.028 Removing: /var/run/dpdk/spdk_pid83251 00:45:23.028 Clean 00:45:23.028 23:27:01 -- common/autotest_common.sh@1453 -- # return 0 00:45:23.028 23:27:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:23.028 23:27:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:23.028 23:27:01 -- common/autotest_common.sh@10 -- # set +x 00:45:23.028 23:27:01 -- spdk/autotest.sh@391 -- # 
00:45:23.028 23:27:01 -- common/autotest_common.sh@1453 -- # return 0
00:45:23.028 23:27:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:45:23.028 23:27:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:45:23.028 23:27:01 -- common/autotest_common.sh@10 -- # set +x
00:45:23.028 23:27:01 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:45:23.028 23:27:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:45:23.028 23:27:01 -- common/autotest_common.sh@10 -- # set +x
00:45:23.289 23:27:01 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:45:23.289 23:27:01 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:45:23.289 23:27:01 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:45:23.289 23:27:01 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:45:23.289 23:27:01 -- spdk/autotest.sh@398 -- # hostname
00:45:23.289 23:27:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:45:23.289 geninfo: WARNING: invalid characters removed from testname!
00:45:49.856 23:27:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:45:51.801 23:27:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:45:53.750 23:27:31 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:45:56.288 23:27:34 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:45:58.197 23:27:36 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:46:00.109 23:27:38 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
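Condensed, the coverage post-processing just logged is a standard lcov capture-merge-filter sequence: capture what the test run accumulated, merge it with a pre-test baseline, then strip uninteresting paths. A minimal sketch, assuming cov_base.info was captured before the tests ran and using OUT/SRC as illustrative shorthand for the long paths above (the --rc branch/function-coverage switches from the log are omitted for brevity):

    #!/usr/bin/env bash
    OUT=/home/vagrant/spdk_repo/spdk/../output   # shorthand; matches the paths in the log
    SRC=/home/vagrant/spdk_repo/spdk

    # Capture coverage accumulated in $SRC during the test run,
    # tagging the tracefile with the host name.
    lcov -q -c --no-external -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge the pre-test baseline with the test capture.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Strip third-party and uninteresting paths, one pattern at a time,
    # rewriting cov_total.info in place just as the log above does.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done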
00:46:02.016 23:27:40 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:46:02.016 23:27:40 -- spdk/autorun.sh@1 -- $ timing_finish
00:46:02.016 23:27:40 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:46:02.016 23:27:40 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:46:02.016 23:27:40 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:46:02.016 23:27:40 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:46:02.016 + [[ -n 5024 ]]
00:46:02.016 + sudo kill 5024
00:46:02.027 [Pipeline] }
00:46:02.043 [Pipeline] // timeout
00:46:02.048 [Pipeline] }
00:46:02.063 [Pipeline] // stage
00:46:02.069 [Pipeline] }
00:46:02.083 [Pipeline] // catchError
00:46:02.093 [Pipeline] stage
00:46:02.095 [Pipeline] { (Stop VM)
00:46:02.108 [Pipeline] sh
00:46:02.393 + vagrant halt
00:46:04.938 ==> default: Halting domain...
00:46:11.570 [Pipeline] sh
00:46:11.850 + vagrant destroy -f
00:46:14.391 ==> default: Removing domain...
00:46:14.669 [Pipeline] sh
00:46:14.955 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output
00:46:14.965 [Pipeline] }
00:46:14.981 [Pipeline] // stage
00:46:14.986 [Pipeline] }
00:46:15.001 [Pipeline] // dir
00:46:15.005 [Pipeline] }
00:46:15.018 [Pipeline] // wrap
00:46:15.022 [Pipeline] }
00:46:15.034 [Pipeline] // catchError
00:46:15.043 [Pipeline] stage
00:46:15.044 [Pipeline] { (Epilogue)
00:46:15.052 [Pipeline] sh
00:46:15.334 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:46:20.627 [Pipeline] catchError
00:46:20.629 [Pipeline] {
00:46:20.643 [Pipeline] sh
00:46:20.930 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:46:20.930 Artifacts sizes are good
00:46:20.940 [Pipeline] }
00:46:20.954 [Pipeline] // catchError
00:46:20.964 [Pipeline] archiveArtifacts
00:46:20.972 Archiving artifacts
00:46:21.066 [Pipeline] cleanWs
00:46:21.079 [WS-CLEANUP] Deleting project workspace...
00:46:21.079 [WS-CLEANUP] Deferred wipeout is used...
00:46:21.087 [WS-CLEANUP] done
00:46:21.089 [Pipeline] }
00:46:21.104 [Pipeline] // stage
00:46:21.109 [Pipeline] }
00:46:21.123 [Pipeline] // node
00:46:21.128 [Pipeline] End of Pipeline
00:46:21.177 Finished: SUCCESS
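For reference, the timing_finish step logged above reduces to a single FlameGraph invocation over the build's timing data. A minimal sketch, assuming timing.txt is already in the semicolon-delimited folded-stack format that flamegraph.pl consumes, and noting that flamegraph.pl writes its SVG to stdout (the timing.svg output name is illustrative):

    #!/usr/bin/env bash
    TIMING=/home/vagrant/spdk_repo/spdk/../output/timing.txt
    FLAMEGRAPH=/usr/local/FlameGraph/flamegraph.pl

    # Only render when both the script and the timing data exist,
    # mirroring the [[ -e ]] / [[ -x ]] guards in the log.
    if [[ -x "$FLAMEGRAPH" && -e "$TIMING" ]]; then
        "$FLAMEGRAPH" --title 'Build Timing' \
                      --nametype Step: \
                      --countname seconds \
                      "$TIMING" > timing.svg   # flamegraph.pl emits SVG on stdout
    fi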